query_id (string, length 1–6) | query (string, length 2–185) | positive_passages (list, length 1–121) | negative_passages (list, length 15–100) |
---|---|---|---|
1840413 | A Sentence Simplification System for Improving Relation Extraction | [
{
"docid": "pos:1840413_0",
"text": "A large number of Open Relation Extraction approaches have been proposed recently, covering a wide range of NLP machinery, from “shallow” (e.g., part-of-speech tagging) to “deep” (e.g., semantic role labeling–SRL). A natural question then is what is the tradeoff between NLP depth (and associated computational cost) versus effectiveness. This paper presents a fair and objective experimental comparison of 8 state-of-the-art approaches over 5 different datasets, and sheds some light on the issue. The paper also describes a novel method, EXEMPLAR, which adapts ideas from SRL to less costly NLP machinery, resulting in substantial gains both in efficiency and effectiveness, over binary and n-ary relation extraction tasks.",
"title": ""
},
{
"docid": "pos:1840413_1",
"text": "We study the problem of automatically extracting information networks formed by recognizable entities as well as relations among them from social media sites. Our approach consists of using state-of-the-art natural language processing tools to identify entities and extract sentences that relate such entities, followed by using text-clustering algorithms to identify the relations within the information network. We propose a new term-weighting scheme that significantly improves on the state-of-the-art in the task of relation extraction, both when used in conjunction with the standard tf ċ idf scheme and also when used as a pruning filter. We describe an effective method for identifying benchmarks for open information extraction that relies on a curated online database that is comparable to the hand-crafted evaluation datasets in the literature. From this benchmark, we derive a much larger dataset which mimics realistic conditions for the task of open information extraction. We report on extensive experiments on both datasets, which not only shed light on the accuracy levels achieved by state-of-the-art open information extraction tools, but also on how to tune such tools for better results.",
"title": ""
},
{
"docid": "pos:1840413_2",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
},
{
"docid": "pos:1840413_3",
"text": "Traditional relation extraction seeks to identify pre-specified semantic relations within natural language text, while open Information Extraction (Open IE) takes a more general approach, and looks for a variety of relations without restriction to a fixed relation set. With this generalization comes the question, what is a relation? For example, should the more general task be restricted to relations mediated by verbs, nouns, or both? To help answer this question, we propose two levels of subtasks for Open IE. One task is to determine if a sentence potentially contains a relation between two entities? The other task looks to confirm explicit relation words for two entities. We propose multiple SVM models with dependency tree kernels for both tasks. For explicit relation extraction, our system can extract both noun and verb relations. Our results on three datasets show that our system is superior when compared to state-of-the-art systems like REVERB and OLLIE for both tasks. For example, in some experiments our system achieves 33% improvement on nominal relation extraction over OLLIE. In addition we propose an unsupervised rule-based approach which can serve as a strong baseline for Open IE systems.",
"title": ""
},
{
"docid": "pos:1840413_4",
"text": "This paper presents PATTY: a large resource for textual patterns that denote binary relations between entities. The patterns are semantically typed and organized into a subsumption taxonomy. The PATTY system is based on efficient algorithms for frequent itemset mining and can process Web-scale corpora. It harnesses the rich type system and entity population of large knowledge bases. The PATTY taxonomy comprises 350,569 pattern synsets. Random-sampling-based evaluation shows a pattern accuracy of 84.7%. PATTY has 8,162 subsumptions, with a random-sampling-based precision of 75%. The PATTY resource is freely available for interactive access and download.",
"title": ""
}
] | [
{
"docid": "neg:1840413_0",
"text": "Ten years on from a review in the twentieth issue of this journal, this contribution assess the direction research in the field of glucose sensing for diabetes is headed and various technologies to be seen in the future. The emphasis of this review was placed on the home blood glucose testing market. After an introduction to diabetes and glucose sensing, this review analyses state of the art and pipeline devices; in particular their user friendliness and technological advancement. This review complements conventional reviews based on scholarly published papers in journals.",
"title": ""
},
{
"docid": "neg:1840413_1",
"text": "Microplastics are present throughout the marine environment and ingestion of these plastic particles (<1 mm) has been demonstrated in a laboratory setting for a wide array of marine organisms. Here, we investigate the presence of microplastics in two species of commercially grown bivalves: Mytilus edulis and Crassostrea gigas. Microplastics were recovered from the soft tissues of both species. At time of human consumption, M. edulis contains on average 0.36 ± 0.07 particles g(-1) (wet weight), while a plastic load of 0.47 ± 0.16 particles g(-1) ww was detected in C. gigas. As a result, the annual dietary exposure for European shellfish consumers can amount to 11,000 microplastics per year. The presence of marine microplastics in seafood could pose a threat to food safety, however, due to the complexity of estimating microplastic toxicity, estimations of the potential risks for human health posed by microplastics in food stuffs is not (yet) possible.",
"title": ""
},
{
"docid": "neg:1840413_2",
"text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.",
"title": ""
},
{
"docid": "neg:1840413_3",
"text": "INTRODUCTION\nFluorescence anisotropy (FA) is one of the major established methods accepted by industry and regulatory agencies for understanding the mechanisms of drug action and selecting drug candidates utilizing a high-throughput format.\n\n\nAREAS COVERED\nThis review covers the basics of FA and complementary methods, such as fluorescence lifetime anisotropy and their roles in the drug discovery process. The authors highlight the factors affecting FA readouts, fluorophore selection and instrumentation. Furthermore, the authors describe the recent development of a successful, commercially valuable FA assay for long QT syndrome drug toxicity to illustrate the role that FA can play in the early stages of drug discovery.\n\n\nEXPERT OPINION\nDespite the success in drug discovery, the FA-based technique experiences competitive pressure from other homogeneous assays. That being said, FA is an established yet rapidly developing technique, recognized by academic institutions, the pharmaceutical industry and regulatory agencies across the globe. The technical problems encountered in working with small molecules in homogeneous assays are largely solved, and new challenges come from more complex biological molecules and nanoparticles. With that, FA will remain one of the major work-horse techniques leading to precision (personalized) medicine.",
"title": ""
},
{
"docid": "neg:1840413_4",
"text": "Big data, because it can mine new knowledge for economic growth and technical innovation, has recently received considerable attention, and many research efforts have been directed to big data processing due to its high volume, velocity, and variety (referred to as \"3V\") challenges. However, in addition to the 3V challenges, the flourishing of big data also hinges on fully understanding and managing newly arising security and privacy challenges. If data are not authentic, new mined knowledge will be unconvincing; while if privacy is not well addressed, people may be reluctant to share their data. Because security has been investigated as a new dimension, \"veracity,\" in big data, in this article, we aim to exploit new challenges of big data in terms of privacy, and devote our attention toward efficient and privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding privacy requirements, and introduce an efficient and privacy-preserving cosine similarity computing protocol as an example in response to data mining's efficiency and privacy requirements in the big data era.",
"title": ""
},
{
"docid": "neg:1840413_5",
"text": "INTRODUCTION\nLiver disease is the third most common cause of premature mortality in the UK. Liver failure accelerates frailty, resulting in skeletal muscle atrophy, functional decline and an associated risk of liver transplant waiting list mortality. However, there is limited research investigating the impact of exercise on patient outcomes pre and post liver transplantation. The waitlist period for patients listed for liver transplantation provides a unique opportunity to provide and assess interventions such as prehabilitation.\n\n\nMETHODS AND ANALYSIS\nThis study is a phase I observational study evaluating the feasibility of conducting a randomised control trial (RCT) investigating the use of a home-based exercise programme (HBEP) in the management of patients awaiting liver transplantation. Twenty eligible patients will be randomly selected from the Queen Elizabeth University Hospital Birmingham liver transplant waiting list. Participants will be provided with an individually tailored 12-week HBEP, including step targets and resistance exercises. Activity trackers and patient diaries will be provided to support data collection. For the initial 6 weeks, telephone support will be given to discuss compliance with the study intervention, achievement of weekly targets, and to address any queries or concerns regarding the intervention. During weeks 6-12, participants will continue the intervention without telephone support to evaluate longer term adherence to the study intervention. On completing the intervention, all participants will be invited to engage in a focus group to discuss their experiences and the feasibility of an RCT.\n\n\nETHICS AND DISSEMINATION\nThe protocol is approved by the National Research Ethics Service Committee North West - Greater Manchester East and Health Research Authority (REC reference: 17/NW/0120). Recruitment into the study started in April 2017 and ended in July 2017. Follow-up of participants is ongoing and due to finish by the end of 2017. The findings of this study will be disseminated through peer-reviewed publications and international presentations. In addition, the protocol will be placed on the British Liver Trust website for public access.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02949505; Pre-results.",
"title": ""
},
{
"docid": "neg:1840413_6",
"text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.",
"title": ""
},
{
"docid": "neg:1840413_7",
"text": "In the next few decades, the proportion of Americans age 65 or older is expected to increase from 12% (36 million) to 20% (80 million) of the total US population [1]. As life expectancy increases, an even greater need arises for cost-effective interventions to improve function and quality of life among older adults [2-4]. All older adults face numerous health problems that can reduce or limit both the quality and quantity of life they will experience. Some of the main problems faced by older adults include reduced physical function and well-being, challenges with mental and emotional functioning and well-being, and more limited social functioning. Not surprisingly, these factors comprise the primary components of comprehensive health-related quality of life [5,6].",
"title": ""
},
{
"docid": "neg:1840413_8",
"text": "Software development has always inherently required multitasking: developers switch between coding, reviewing, testing, designing, and meeting with colleagues. The advent of software ecosystems like GitHub has enabled something new: the ability to easily switch between projects. Developers also have social incentives to contribute to many projects; prolific contributors gain social recognition and (eventually) economic rewards. Multitasking, however, comes at a cognitive cost: frequent context-switches can lead to distraction, sub-standard work, and even greater stress. In this paper, we gather ecosystem-level data on a group of programmers working on a large collection of projects. We develop models and methods for measuring the rate and breadth of a developers' context-switching behavior, and we study how context-switching affects their productivity. We also survey developers to understand the reasons for and perceptions of multitasking. We find that the most common reason for multitasking is interrelationships and dependencies between projects. Notably, we find that the rate of switching and breadth (number of projects) of a developer's work matter. Developers who work on many projects have higher productivity if they focus on few projects per day. Developers that switch projects too much during the course of a day have lower productivity as they work on more projects overall. Despite these findings, developers perceptions of the benefits of multitasking are varied.",
"title": ""
},
{
"docid": "neg:1840413_9",
"text": "This paper deals with the use of Petri nets in modelling railway network and designing appropriate control logic for it to avoid collision. Here, the whole railway network is presented as a combination of the elementary models – tracks, stations and points (switch) within the station including sensors and semaphores. We use generalized mutual exclusion constraints and constraints containing the firing vector to ensure safeness of the railway network. In this research work, we have actually introduced constraints at the points within the station. These constraints ensure that when a track is occupied, we control the switch so that another train will not enter into the same track and thus avoid collision.",
"title": ""
},
{
"docid": "neg:1840413_10",
"text": "Multiple Classifier Systems (MCS) have been widely studied as an alternative for increasing accuracy in pattern recognition. One of the most promising MCS approaches is Dynamic Selection (DS), in which the base classifiers are selected on the fly, according to each new sample to be classified. This paper provides a review of the DS techniques proposed in the literature from a theoretical and empirical point of view. We propose an updated taxonomy based on the main characteristics found in a dynamic selection system: (1) The methodology used to define a local region for the estimation of the local competence of the base classifiers; (2) The source of information used to estimate the level of competence of the base classifiers, such as local accuracy, oracle, ranking and probabilistic models, and (3) The selection approach, which determines whether a single or an ensemble of classifiers is selected. We categorize the main dynamic selection techniques in the DS literature based on the proposed taxonomy. We also conduct an extensive experimental analysis, considering a total of 18 state-of-the-art dynamic selection techniques, as well as static ensemble combination and single classification models. To date, this is the first analysis comparing all the key DS techniques under the same experimental protocol. Furthermore, we also present several perspectives and open research questions that can be used as a guide for future works in this domain. © 2017 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840413_11",
"text": "Traumatic dislocation of the testis is a rare event in which the full extent of the dislocation is present immediately following the initial trauma. We present a case in which the testicular dislocation progressed over a period of four days following the initial scrotal trauma.",
"title": ""
},
{
"docid": "neg:1840413_12",
"text": "Portable automatic seizure detection system is very convenient for epilepsy patients to carry. In order to make the system on-chip trainable with high efficiency and attain high detection accuracy, this paper presents a very large scale integration (VLSI) design based on the nonlinear support vector machine (SVM). The proposed design mainly consists of a feature extraction (FE) module and an SVM module. The FE module performs the three-level Daubechies discrete wavelet transform to fit the physiological bands of the electroencephalogram (EEG) signal and extracts the time–frequency domain features reflecting the nonstationary signal properties. The SVM module integrates the modified sequential minimal optimization algorithm with the table-driven-based Gaussian kernel to enable efficient on-chip learning. The presented design is verified on an Altera Cyclone II field-programmable gate array and tested using the two publicly available EEG datasets. Experiment results show that the designed VLSI system improves the detection accuracy and training efficiency.",
"title": ""
},
{
"docid": "neg:1840413_13",
"text": "To be successful in financial market trading it is necessary to correctly predict future market trends. Most professional traders use technical analysis to forecast future market prices. In this paper, we present a new hybrid intelligent method to forecast financial time series, especially for the Foreign Exchange Market (FX). To emulate the way real traders make predictions, this method uses both historical market data and chart patterns to forecast market trends. First, wavelet full decomposition of time series analysis was used as an Adaptive Network-based Fuzzy Inference System (ANFIS) input data for forecasting future market prices. Also, Quantum-behaved Particle Swarm Optimization (QPSO) for tuning the ANFIS membership functions has been used. The second part of this paper proposes a novel hybrid Dynamic Time Warping (DTW)-Wavelet Transform (WT) method for automatic pattern extraction. The results indicate that the presented hybrid method is a very useful and effective one for financial price forecasting and financial pattern extraction. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840413_14",
"text": "Rob Antrobus Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk r.antrobus1@lancaster.ac.uk Sylvain Frey Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk s.frey@lancaster.ac.uk Benjamin Green Security Lancaster Research Centre Lancaster University Lancaster LA1 4WA UK security-centre.lancs.ac.uk b.green2@lancaster.ac.uk",
"title": ""
},
{
"docid": "neg:1840413_15",
"text": "This paper reports finding from a study carried out in a remote rural area of Bangladesh during December 2000. Nineteen key informants were interviewed for collecting data on domestic violence against women. Each key informant provided information about 10 closest neighbouring ever-married women covering a total of 190 women. The questionnaire included information about frequency of physical violence, verbal abuse, and other relevant information, including background characteristics of the women and their husbands. 50.5% of the women were reported to be battered by their husbands and 2.1% by other family members. Beating by the husband was negatively related with age of husband: the odds of beating among women with husbands aged less than 30 years were six times of those with husbands aged 50 years or more. Members of micro-credit societies also had higher odds of being beaten than non-members. The paper discusses the possibility of community-centred interventions by raising awareness about the violation of human rights issues and other legal and psychological consequences to prevent domestic violence against women.",
"title": ""
},
{
"docid": "neg:1840413_16",
"text": "Title Type cities and complexity understanding cities with cellular automata agent-based models and fractals PDF the complexity of cooperation agent-based models of competition and collaboration PDF party competition an agent-based model princeton studies in complexity PDF sharing cities a case for truly smart and sustainable cities urban and industrial environments PDF global metropolitan globalizing cities in a capitalist world questioning cities PDF state of the worlds cities 201011 cities for all bridging the urban divide PDF new testament cities in western asia minor light from archaeology on cities of paul and the seven churches of revelation PDF",
"title": ""
},
{
"docid": "neg:1840413_17",
"text": "The main receptors for amyloid-beta peptide (Abeta) transport across the blood-brain barrier (BBB) from brain to blood and blood to brain are low-density lipoprotein receptor related protein-1 (LRP1) and receptor for advanced glycation end products (RAGE), respectively. In normal human plasma a soluble form of LRP1 (sLRP1) is a major endogenous brain Abeta 'sinker' that sequesters some 70 to 90 % of plasma Abeta peptides. In Alzheimer's disease (AD), the levels of sLRP1 and its capacity to bind Abeta are reduced which increases free Abeta fraction in plasma. This in turn may increase brain Abeta burden through decreased Abeta efflux and/or increased Abeta influx across the BBB. In Abeta immunotherapy, anti-Abeta antibody sequestration of plasma Abeta enhances the peripheral Abeta 'sink action'. However, in contrast to endogenous sLRP1 which does not penetrate the BBB, some anti-Abeta antibodies may slowly enter the brain which reduces the effectiveness of their sink action and may contribute to neuroinflammation and intracerebral hemorrhage. Anti-Abeta antibody/Abeta immune complexes are rapidly cleared from brain to blood via FcRn (neonatal Fc receptor) across the BBB. In a mouse model of AD, restoring plasma sLRP1 with recombinant LRP-IV cluster reduces brain Abeta burden and improves functional changes in cerebral blood flow (CBF) and behavioral responses, without causing neuroinflammation and/or hemorrhage. The C-terminal sequence of Abeta is required for its direct interaction with sLRP and LRP-IV cluster which is completely blocked by the receptor-associated protein (RAP) that does not directly bind Abeta. Therapies to increase LRP1 expression or reduce RAGE activity at the BBB and/or restore the peripheral Abeta 'sink' action, hold potential to reduce brain Abeta and inflammation, and improve CBF and functional recovery in AD models, and by extension in AD patients.",
"title": ""
},
{
"docid": "neg:1840413_18",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] |
1840414 | Learning Sense-specific Word Embeddings By Exploiting Bilingual Resources | [
{
"docid": "pos:1840414_0",
"text": "Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym–hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29%.",
"title": ""
}
] | [
{
"docid": "neg:1840414_0",
"text": "Context-based pairing solutions increase the usability of IoT device pairing by eliminating any human involvement in the pairing process. This is possible by utilizing on-board sensors (with same sensing modalities) to capture a common physical context (e.g., ambient sound via each device's microphone). However, in a smart home scenario, it is impractical to assume that all devices will share a common sensing modality. For example, a motion detector is only equipped with an infrared sensor while Amazon Echo only has microphones. In this paper, we develop a new context-based pairing mechanism called Perceptio that uses time as the common factor across differing sensor types. By focusing on the event timing, rather than the specific event sensor data, Perceptio creates event fingerprints that can be matched across a variety of IoT devices. We propose Perceptio based on the idea that devices co-located within a physically secure boundary (e.g., single family house) can observe more events in common over time, as opposed to devices outside. Devices make use of the observed contextual information to provide entropy for Perceptio's pairing protocol. We design and implement Perceptio, and evaluate its effectiveness as an autonomous secure pairing solution. Our implementation demonstrates the ability to sufficiently distinguish between legitimate devices (placed within the boundary) and attacker devices (placed outside) by imposing a threshold on fingerprint similarity. Perceptio demonstrates an average fingerprint similarity of 94.9% between legitimate devices while even a hypothetical impossibly well-performing attacker yields only 68.9% between itself and a valid device.",
"title": ""
},
{
"docid": "neg:1840414_1",
"text": "With advances in brain-computer interface (BCI) research, a portable few- or single-channel BCI system has become necessary. Most recent BCI studies have demonstrated that the common spatial pattern (CSP) algorithm is a powerful tool in extracting features for multiple-class motor imagery. However, since the CSP algorithm requires multi-channel information, it is not suitable for a few- or single-channel system. In this study, we applied a short-time Fourier transform to decompose a single-channel electroencephalography signal into the time-frequency domain and construct multi-channel information. Using the reconstructed data, the CSP was combined with a support vector machine to obtain high classification accuracies from channels of both the sensorimotor and forehead areas. These results suggest that motor imagery can be detected with a single channel not only from the traditional sensorimotor area but also from the forehead area.",
"title": ""
},
{
"docid": "neg:1840414_2",
"text": "The Information Artifact Ontology (IAO) was created to serve as a domain‐neutral resource for the representation of types of information content entities (ICEs) such as documents, data‐bases, and digital im‐ ages. We identify a series of problems with the current version of the IAO and suggest solutions designed to advance our understanding of the relations between ICEs and associated cognitive representations in the minds of human subjects. This requires embedding IAO in a larger framework of ontologies, including most importantly the Mental Func‐ tioning Ontology (MFO). It also requires a careful treatment of the aboutness relations between ICEs and associated cognitive representa‐ tions and their targets in reality.",
"title": ""
},
{
"docid": "neg:1840414_3",
"text": "A Nyquist ADC with time-based pipelined architecture is proposed. The proposed hybrid pipeline stage, incorporating time-domain amplification based on a charge pump, enables power efficient analog to digital conversion. The proposed ADC also adopts a minimalist switched amplifier with 24dB open-loop dc gain in the first stage MDAC that is based on a new V-T operation, instead of a conventional high gain amplifier. The measured results of the prototype ADC implemented in a 0.13μm CMOS demonstrate peak SNDR of 69.3dB at 6.38mW power, with a near rail-to-rail 1MHz input of 2.4VP-P at 70MHz sampling frequency and 1.3V supply. This results in 38.2fJ/conversion-step FOM.",
"title": ""
},
{
"docid": "neg:1840414_4",
"text": "Identifying peer-review helpfulness is an important task for improving the quality of feedback that students receive from their peers. As a first step towards enhancing existing peerreview systems with new functionality based on helpfulness detection, we examine whether standard product review analysis techniques also apply to our new context of peer reviews. In addition, we investigate the utility of incorporating additional specialized features tailored to peer review. Our preliminary results show that the structural features, review unigrams and meta-data combined are useful in modeling the helpfulness of both peer reviews and product reviews, while peer-review specific auxiliary features can further improve helpfulness prediction.",
"title": ""
},
{
"docid": "neg:1840414_5",
"text": "In this paper, a complimentary split ring resonator (CSRR) enhanced wideband log-periodic antenna with coupled microstrip line feeding is presented. Here in this work, coupled line feeding to the patches is proposed to avoid individual microstrip feed matching complexities. Three CSRR elements were etched in the ground plane. Individual patches were designed according to the conventional log-periodic design rules. FR4 dielectric substrate is used to design a five-element log-periodic patch with CSRR printed on the ground plane. The result shows a wide operating band ranging from 4.5 GHz to 9 GHz. Surface current distribution of the antenna shows a strong resonance of CSRR's placed in the ground plane. The design approach of the antenna is reported and performance of the proposed antenna has been evaluated through three dimensional electromagnetic simulation validating performance enhancement of the antenna due to presence of CSRRs. Antennas designed in this work may be used in satellite and indoor wireless communication.",
"title": ""
},
{
"docid": "neg:1840414_6",
"text": "Recovering a low-rank tensor from incomplete information is a recurring problem in signal processing and machine learning. The most popular convex relaxation of this problem minimizes the sum of the nuclear norms of the unfoldings of the tensor. We show that this approach can be substantially suboptimal: reliably recovering a K-way tensor of length n and Tucker rank r from Gaussian measurements requires Ω(rnK−1) observations. In contrast, a certain (intractable) nonconvex formulation needs only O(r +nrK) observations. We introduce a very simple, new convex relaxation, which partially bridges this gap. Our new formulation succeeds with O(rbK/2cndK/2e) observations. While these results pertain to Gaussian measurements, simulations strongly suggest that the new norm also outperforms the sum of nuclear norms for tensor completion from a random subset of entries. Our lower bound for the sum-of-nuclear-norms model follows from a new result on recovering signals with multiple sparse structures (e.g. sparse, low rank), which perhaps surprisingly demonstrates the significant suboptimality of the commonly used recovery approach via minimizing the sum of individual sparsity inducing norms (e.g. l1, nuclear norm). Our new formulation for low-rank tensor recovery however opens the possibility in reducing the sample complexity by exploiting several structures jointly.",
"title": ""
},
{
"docid": "neg:1840414_7",
"text": "We study deep learning approaches to inferring numerical coordinates for points of interest in an input image. Existing convolutional neural network-based solutions to this problem either take a heatmap matching approach or regress to coordinates with a fully connected output layer. Neither of these approaches is ideal, since the former is not entirely differentiable, and the latter lacks inherent spatial generalization. We propose our differentiable spatial to numerical transform (DSNT) to fill this gap. The DSNT layer adds no trainable parameters, is fully differentiable, and exhibits good spatial generalization. Unlike heatmap matching, DSNT works well with low heatmap resolutions, so it can be dropped in as an output layer for a wide range of existing fully convolutional architectures. Consequently, DSNT offers a better trade-off between inference speed and prediction accuracy compared to existing techniques. When used to replace the popular heatmap matching approach used in almost all state-of-the-art methods for pose estimation, DSNT gives better prediction accuracy for all model architectures tested.",
"title": ""
},
{
"docid": "neg:1840414_8",
"text": "We introduce and create a framework for deriving probabilistic models of Information Retrieval. The models are nonparametric models of IR obtained in the language model approach. We derive term-weighting models by measuring the divergence of the actual term distribution from that obtained under a random process. Among the random processes we study the binomial distribution and Bose--Einstein statistics. We define two types of term frequency normalization for tuning term weights in the document--query matching process. The first normalization assumes that documents have the same length and measures the information gain with the observed term once it has been accepted as a good descriptor of the observed document. The second normalization is related to the document length and to other statistics. These two normalization methods are applied to the basic models in succession to obtain weighting formulae. Results show that our framework produces different nonparametric models forming baseline alternatives to the standard tf-idf model.",
"title": ""
},
{
"docid": "neg:1840414_9",
"text": "BACKGROUND\nMore than one in five patients who undergo treatment for breast cancer will develop breast cancer-related lymphedema (BCRL). BCRL can occur as a result of breast cancer surgery and/or radiation therapy. BCRL can negatively impact comfort, function, and quality of life (QoL). Manual lymphatic drainage (MLD), a type of hands-on therapy, is frequently used for BCRL and often as part of complex decongestive therapy (CDT). CDT is a fourfold conservative treatment which includes MLD, compression therapy (consisting of compression bandages, compression sleeves, or other types of compression garments), skin care, and lymph-reducing exercises (LREs). Phase 1 of CDT is to reduce swelling; Phase 2 is to maintain the reduced swelling.\n\n\nOBJECTIVES\nTo assess the efficacy and safety of MLD in treating BCRL.\n\n\nSEARCH METHODS\nWe searched Medline, EMBASE, CENTRAL, WHO ICTRP (World Health Organization's International Clinical Trial Registry Platform), and Cochrane Breast Cancer Group's Specialised Register from root to 24 May 2013. No language restrictions were applied.\n\n\nSELECTION CRITERIA\nWe included randomized controlled trials (RCTs) or quasi-RCTs of women with BCRL. The intervention was MLD. The primary outcomes were (1) volumetric changes, (2) adverse events. Secondary outcomes were (1) function, (2) subjective sensations, (3) QoL, (4) cost of care.\n\n\nDATA COLLECTION AND ANALYSIS\nWe collected data on three volumetric outcomes. (1) LE (lymphedema) volume was defined as the amount of excess fluid left in the arm after treatment, calculated as volume in mL of affected arm post-treatment minus unaffected arm post-treatment. (2) Volume reduction was defined as the amount of fluid reduction in mL from before to after treatment calculated as the pretreatment LE volume of the affected arm minus the post-treatment LE volume of the affected arm. (3) Per cent reduction was defined as the proportion of fluid reduced relative to the baseline excess volume, calculated as volume reduction divided by baseline LE volume multiplied by 100. We entered trial data into Review Manger 5.2 (RevMan), pooled data using a fixed-effect model, and analyzed continuous data as mean differences (MDs) with 95% confidence intervals (CIs). We also explored subgroups to determine whether mild BCRL compared to moderate or severe BCRL, and BCRL less than a year compared to more than a year was associated with a better response to MLD.\n\n\nMAIN RESULTS\nSix trials were included. Based on similar designs, trials clustered in three categories.(1) MLD + standard physiotherapy versus standard physiotherapy (one trial) showed significant improvements in both groups from baseline but no significant between-groups differences for per cent reduction.(2) MLD + compression bandaging versus compression bandaging (two trials) showed significant per cent reductions of 30% to 38.6% for compression bandaging alone, and an additional 7.11% reduction for MLD (MD 7.11%, 95% CI 1.75% to 12.47%; two RCTs; 83 participants). Volume reduction was borderline significant (P = 0.06). LE volume was not significant. Subgroup analyses was significant showing that participants with mild-to-moderate BCRL were better responders to MLD than were moderate-to-severe participants.(3) MLD + compression therapy versus nonMLD treatment + compression therapy (three trials) were too varied to pool. One of the trials compared compression sleeve plus MLD to compression sleeve plus pneumatic pump. 
Volume reduction was statistically significant favoring MLD (MD 47.00 mL, 95% CI 15.25 mL to 78.75 mL; 1 RCT; 24 participants), per cent reduction was borderline significant (P=0.07), and LE volume was not significant. A second trial compared compression sleeve plus MLD to compression sleeve plus self-administered simple lymphatic drainage (SLD), and was significant for MLD for LE volume (MD -230.00 mL, 95% CI -450.84 mL to -9.16 mL; 1 RCT; 31 participants) but not for volume reduction or per cent reduction. A third trial of MLD + compression bandaging versus SLD + compression bandaging was not significant (P = 0.10) for per cent reduction, the only outcome measured (MD 11.80%, 95% CI -2.47% to 26.07%, 28 participants).MLD was well tolerated and safe in all trials.Two trials measured function as range of motion with conflicting results. One trial reported significant within-groups gains for both groups, but no between-groups differences. The other trial reported there were no significant within-groups gains and did not report between-groups results. One trial measured strength and reported no significant changes in either group.Two trials measured QoL, but results were not usable because one trial did not report any results, and the other trial did not report between-groups results.Four trials measured sensations such as pain and heaviness. Overall, the sensations were significantly reduced in both groups over baseline, but with no between-groups differences. No trials reported cost of care.Trials were small ranging from 24 to 45 participants. Most trials appeared to randomize participants adequately. However, in four trials the person measuring the swelling knew what treatment the participants were receiving, and this could have biased results.\n\n\nAUTHORS' CONCLUSIONS\nMLD is safe and may offer additional benefit to compression bandaging for swelling reduction. Compared to individuals with moderate-to-severe BCRL, those with mild-to-moderate BCRL may be the ones who benefit from adding MLD to an intensive course of treatment with compression bandaging. This finding, however, needs to be confirmed by randomized data.In trials where MLD and sleeve were compared with a nonMLD treatment and sleeve, volumetric outcomes were inconsistent within the same trial. Research is needed to identify the most clinically meaningful volumetric measurement, to incorporate newer technologies in LE assessment, and to assess other clinically relevant outcomes such as fibrotic tissue formation.Findings were contradictory for function (range of motion), and inconclusive for quality of life.For symptoms such as pain and heaviness, 60% to 80% of participants reported feeling better regardless of which treatment they received.One-year follow-up suggests that once swelling had been reduced, participants were likely to keep their swelling down if they continued to use a custom-made sleeve.",
"title": ""
},
{
"docid": "neg:1840414_10",
"text": "The growing amounts of textual data require automatic methods for structuring relevant information so that it can be further processed by computers and systematically accessed by humans. The scenario dealt with in this dissertation is known as Knowledge Base Population (KBP), where relational information about entities is retrieved from a large text collection and stored in a database, structured according to a prespecified schema. Most of the research in this dissertation is placed in the context of the KBP benchmark of the Text Analysis Conference (TAC KBP), which provides a test-bed to examine all steps in a complex end-to-end relation extraction setting. In this dissertation a new state of the art for the TAC KBP benchmark was achieved by focussing on the following research problems: (1) The KBP task was broken down into a modular pipeline of sub-problems, and the most pressing issues were identified and quantified at all steps. (2) The quality of semi-automatically generated training data was increased by developing noise-reduction methods, decreasing the influence of false-positive training examples. (3) A focus was laid on fine-grained entity type modelling, entity expansion, entity matching and tagging, to maintain as much recall as possible on the relational argument level. (4) A new set of effective methods for generating training data, encoding features and training relational classifiers was developed and compared with previous state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840414_11",
"text": "The genesis of the internet and web has created huge information on the web, including users’ digital or textual opinions and reviews. This leads to compiling many features in document-level. Consequently, we will have a high-dimensional feature space. In this paper, we propose an algorithm based on standard deviation method to solve the high-dimensional feature space. The algorithm constructs feature subsets based on dispersion of features. In other words, algorithm selects the features with higher value of standard deviation for construction of the subsets. To do this, the paper presents an experiment of performance estimation on sentiment analysis dataset using ensemble of classifiers when dimensionality reduction is performed on the input space using three different methods. Also different types of base classifiers and classifier combination rules were used.",
"title": ""
},
{
"docid": "neg:1840414_12",
"text": "The authors have been developing humanoid robots in order to develop new mechanisms and functions for a humanoid robot that has the ability to communicate naturally with a human by expressing human-like emotion. In 2004, we developed the emotion expression humanoid robot WE-4RII (Waseda Eye No.4 Refined II) by integrating the new humanoid robot hands RCH-I (RoboCasa Hand No.1) into the emotion expression humanoid robot WE-4R. We confirmed that WE-4RII can effectively express its emotion.",
"title": ""
},
{
"docid": "neg:1840414_13",
"text": "Online social media have democratized the broadcasting of information, encouraging users to view the world through the lens of social networks. The exploitation of this lens, termed social sensing, presents challenges for researchers at the intersection of computer science and the social sciences.",
"title": ""
},
{
"docid": "neg:1840414_14",
"text": "CONTEXT\nEvidence suggests that early adverse experiences play a preeminent role in development of mood and anxiety disorders and that corticotropin-releasing factor (CRF) systems may mediate this association.\n\n\nOBJECTIVE\nTo determine whether early-life stress results in a persistent sensitization of the hypothalamic-pituitary-adrenal axis to mild stress in adulthood, thereby contributing to vulnerability to psychopathological conditions.\n\n\nDESIGN AND SETTING\nProspective controlled study conducted from May 1997 to July 1999 at the General Clinical Research Center of Emory University Hospital, Atlanta, Ga.\n\n\nPARTICIPANTS\nForty-nine healthy women aged 18 to 45 years with regular menses, with no history of mania or psychosis, with no active substance abuse or eating disorder within 6 months, and who were free of hormonal and psychotropic medications were recruited into 4 study groups (n = 12 with no history of childhood abuse or psychiatric disorder [controls]; n = 13 with diagnosis of current major depression who were sexually or physically abused as children; n = 14 without current major depression who were sexually or physically abused as children; and n = 10 with diagnosis of current major depression and no history of childhood abuse).\n\n\nMAIN OUTCOME MEASURES\nAdrenocorticotropic hormone (ACTH) and cortisol levels and heart rate responses to a standardized psychosocial laboratory stressor compared among the 4 study groups.\n\n\nRESULTS\nWomen with a history of childhood abuse exhibited increased pituitary-adrenal and autonomic responses to stress compared with controls. This effect was particularly robust in women with current symptoms of depression and anxiety. Women with a history of childhood abuse and a current major depression diagnosis exhibited a more than 6-fold greater ACTH response to stress than age-matched controls (net peak of 9.0 pmol/L [41.0 pg/mL]; 95% confidence interval [CI], 4.7-13.3 pmol/L [21.6-60. 4 pg/mL]; vs net peak of 1.4 pmol/L [6.19 pg/mL]; 95% CI, 0.2-2.5 pmol/L [1.0-11.4 pg/mL]; difference, 8.6 pmol/L [38.9 pg/mL]; 95% CI, 4.6-12.6 pmol/L [20.8-57.1 pg/mL]; P<.001).\n\n\nCONCLUSIONS\nOur findings suggest that hypothalamic-pituitary-adrenal axis and autonomic nervous system hyperreactivity, presumably due to CRF hypersecretion, is a persistent consequence of childhood abuse that may contribute to the diathesis for adulthood psychopathological conditions. Furthermore, these results imply a role for CRF receptor antagonists in the prevention and treatment of psychopathological conditions related to early-life stress. JAMA. 2000;284:592-597",
"title": ""
},
{
"docid": "neg:1840414_15",
"text": "Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transformations, a set of powerful, stably invertible, and learnable transformations, resulting in an unsupervised learning algorithm with exact log-likelihood computation, exact and efficient sampling, exact and efficient inference of latent variables, and an interpretable latent space. We demonstrate its ability to model natural images on four datasets through sampling, log-likelihood evaluation, and latent variable manipulations.",
"title": ""
},
{
"docid": "neg:1840414_16",
"text": "Hadapt is a start-up company currently commercializing the Yale University research project called HadoopDB. The company focuses on building a platform for Big Data analytics in the cloud by introducing a storage layer optimized for structured data and by providing a framework for executing SQL queries efficiently. This work considers processing data warehousing queries over very large datasets. Our goal is to maximize perfor mance while, at the same time, not giving up fault tolerance and scalability. We analyze the complexity of this problem in the split execution environment of HadoopDB. Here, incoming queries are examined; parts of the query are pushed down and executed inside the higher performing database layer; and the rest of the query is processed in a more generic MapReduce framework.\n In this paper, we discuss in detail performance-oriented query execution strategies for data warehouse queries in split execution environments, with particular focus on join and aggregation operations. The efficiency of our techniques is demonstrated by running experiments using the TPC-H benchmark with 3TB of data. In these experiments we compare our results with a standard commercial parallel database and an open-source MapReduce implementation featuring a SQL interface (Hive). We show that HadoopDB successfully competes with other systems.",
"title": ""
},
{
"docid": "neg:1840414_17",
"text": "In medical imaging, Computer Aided Diagnosis (CAD) is a rapidly growing dynamic area of research. In recent years, significant attempts are made for the enhancement of computer aided diagnosis applications because errors in medical diagnostic systems can result in seriously misleading medical treatments. Machine learning is important in Computer Aided Diagnosis. After using an easy equation, objects such as organs may not be indicated accurately. So, pattern recognition fundamentally involves learning from examples. In the field of bio-medical, pattern recognition and machine learning promise the improved accuracy of perception and diagnosis of disease. They also promote the objectivity of decision-making process. For the analysis of high-dimensional and multimodal bio-medical data, machine learning offers a worthy approach for making classy and automatic algorithms. This survey paper provides the comparative analysis of different machine learning algorithms for diagnosis of different diseases such as heart disease, diabetes disease, liver disease, dengue disease and hepatitis disease. It brings attention towards the suite of machine learning algorithms and tools that are used for the analysis of diseases and decision-making process accordingly.",
"title": ""
},
{
"docid": "neg:1840414_18",
"text": "A tensegrity is finite configuration of points in Ed suspended rigidly by inextendable cables and incompressable struts. Here it is explained how a stress-energy function, given by a symmetric stress matrix, can be used to create tensegrities that are globally rigid in the sense that the only configurations that satisfy the cable and strut constraints are congruent copies.",
"title": ""
},
{
"docid": "neg:1840414_19",
"text": "Online shopping, different from traditional shopping behavior, is characterized with uncertainty, anonymity, and lack of control and potential opportunism. Therefore, trust is an important factor to facilitate online transactions. The purpose of this study is to explore the role of trust in consumer online purchase behavior. This study undertook a comprehensive survey of online customers having e-shopping experiences in Taiwan and we received 1258 valid questionnaires. The empirical results, using structural equation modeling, indicated that perceived ease of use and perceived usefulness affect have a significant impact on trust in e-commerce. Trust also has a significant influence on attitude towards online purchase. However, there is no significant impact from trust on the intention of online purchase.",
"title": ""
}
] |
1840415 | Group-based multi-trajectory modeling. | [
{
"docid": "pos:1840415_0",
"text": "Recent technological advances have expanded the breadth of available omic data, from whole-genome sequencing data, to extensive transcriptomic, methylomic and metabolomic data. A key goal of analyses of these data is the identification of effective models that predict phenotypic traits and outcomes, elucidating important biomarkers and generating important insights into the genetic underpinnings of the heritability of complex traits. There is still a need for powerful and advanced analysis strategies to fully harness the utility of these comprehensive high-throughput data, identifying true associations and reducing the number of false associations. In this Review, we explore the emerging approaches for data integration — including meta-dimensional and multi-staged analyses — which aim to deepen our understanding of the role of genetics and genomics in complex outcomes. With the use and further development of these approaches, an improved understanding of the relationship between genomic variation and human phenotypes may be revealed.",
"title": ""
}
] | [
{
"docid": "neg:1840415_0",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
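The passage above models translational motion for a Kalman Filter, absorbing non-constant velocity into white process noise and arguing for a deliberate (non-identity) initialization. A minimal Python sketch of that setup follows; the time step, noise levels and initial covariance are illustrative assumptions, not values taken from the report.

```python
import numpy as np

# Constant-velocity model; state x = [position, velocity].
# Changes in velocity are absorbed into white process noise, as described above.
dt = 1.0 / 30.0                      # assumed frame rate of a vision-based sensor
F = np.array([[1.0, dt],
              [0.0, 1.0]])           # state transition
H = np.array([[1.0, 0.0]])           # only position is measured
q, r = 1e-2, 1e-1                    # assumed process / measurement noise levels
Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                  [dt**3 / 2, dt**2]])
R = np.array([[r]])

# Initialization: a large diagonal P encodes high uncertainty about the initial
# state, instead of an arbitrary identity matrix.
x = np.array([[0.0], [0.0]])
P = np.diag([10.0, 10.0])

def kalman_step(x, P, z):
    """One predict/update cycle for a scalar position measurement z."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.02, 0.05, 0.11, 0.14]:   # fake position measurements
    x, P = kalman_step(x, P, np.array([[z]]))
```

Starting P large instead of at the identity simply encodes that the first measurements should dominate the initial guess.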
{
"docid": "neg:1840415_1",
"text": "Wilkinson Power Dividers/Combiners The in-phase power combiners and dividers are important components of the RF and microwave transmitters when it is necessary to deliver a high level of the output power to antenna, especially in phased-array systems. In this case, it is also required to provide a high degree of isolation between output ports over some frequency range for identical in-phase signals with equal amplitudes. Figure 19(a) shows a planar structure of the basic parallel beam N-way divider/combiner, which provides a combination of powers from the N signal sources. Here, the input impedance of the N transmission lines (connected in parallel) with the characteristic impedance of Z0 each is equal to Z0/N. Consequently, an additional quarterwave transmission line with the characteristic impedance",
"title": ""
},
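The Wilkinson passage above is truncated, but the quantity it is building up to follows from the standard quarter-wave transformer relation Zin = Zt^2 / ZL: matching the parallel junction impedance Z0/N back to Z0 requires a line of characteristic impedance Z0/sqrt(N). A small sketch of that arithmetic, with Z0 = 50 ohm and N = 4 as assumed example values:

```python
import math

def quarter_wave_match(z_source: float, z_load: float) -> float:
    """Characteristic impedance of a quarter-wave line matching z_load to z_source."""
    return math.sqrt(z_source * z_load)

Z0 = 50.0                                 # assumed system impedance
N = 4                                     # assumed number of combined branches
Z_parallel = Z0 / N                       # N identical lines in parallel
Z_t = quarter_wave_match(Z0, Z_parallel)  # Z0 / sqrt(N) = 25 ohm for N = 4
print(f"junction: {Z_parallel:.1f} ohm, quarter-wave line: {Z_t:.1f} ohm")
```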
{
"docid": "neg:1840415_2",
"text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.",
"title": ""
},
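The key hand-off in the passage above is that DBSCAN's eps and min_samples are derived from the clusters already found by the DSets stage rather than supplied by the user. The sketch below shows one way that could look; the nearest-neighbour heuristic for eps and the 10% rule for min_samples are assumptions for illustration, not the authors' formulas.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dbscan_from_seed_clusters(X, seed_labels):
    """Extend seed clusters (e.g. from a DSets stage) with DBSCAN.
    X: (n, d) data array; seed_labels: cluster ids, -1 for unassigned points."""
    eps_candidates, sizes = [], []
    for c in set(seed_labels) - {-1}:
        members = X[seed_labels == c]
        sizes.append(len(members))
        if len(members) < 2:
            continue
        # median nearest-neighbour distance within the seed cluster (assumed heuristic)
        d = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        eps_candidates.append(np.median(d.min(axis=1)))
    eps = float(np.median(eps_candidates))
    min_samples = max(2, int(0.1 * np.median(sizes)))   # assumed 10% rule
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
```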
{
"docid": "neg:1840415_3",
"text": "Energy restriction induces physiological effects that hinder further weight loss. Thus, deliberate periods of energy balance during weight loss interventions may attenuate these adaptive responses to energy restriction and thereby increase the efficiency of weight loss (i.e. the amount of weight or fat lost per unit of energy deficit). To address this possibility, we systematically searched MEDLINE, PreMEDLINE, PubMed and Cinahl and reviewed adaptive responses to energy restriction in 40 publications involving humans of any age or body mass index that had undergone a diet involving intermittent energy restriction, 12 with direct comparison to continuous energy restriction. Included publications needed to measure one or more of body weight, body mass index, or body composition before and at the end of energy restriction. 31 of the 40 publications involved 'intermittent fasting' of 1-7-day periods of severe energy restriction. While intermittent fasting appears to produce similar effects to continuous energy restriction to reduce body weight, fat mass, fat-free mass and improve glucose homeostasis, and may reduce appetite, it does not appear to attenuate other adaptive responses to energy restriction or improve weight loss efficiency, albeit most of the reviewed publications were not powered to assess these outcomes. Intermittent fasting thus represents a valid--albeit apparently not superior--option to continuous energy restriction for weight loss.",
"title": ""
},
{
"docid": "neg:1840415_4",
"text": "Physical Unclonable Functions (PUFs) are cryptographic primitives that can be used to generate volatile secret keys for cryptographic operations and enable low-cost authentication of integrated circuits. Existing PUF designs mainly exploit variation effects on silicon and hence are not readily applicable for the authentication of printed circuit boards (PCBs). To tackle the above problem, in this paper, we propose a novel PUF device that is able to generate unique and stable IDs for individual PCB, namely BoardPUF. To be specific, we embed a number of capacitors in the internal layer of PCBs and utilize their variations for key generation. Then, by integrating a cryptographic primitive (e.g. hash function) into BoardPUF, we can effectively perform PCB authentication in a challenge-response manner. Our experimental results on fabricated boards demonstrate the efficacy of BoardPUF.",
"title": ""
},
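The BoardPUF abstract above only states that capacitor variations yield a board-unique key and that a hash-based primitive enables challenge-response authentication. The sketch below illustrates just that flow; the threshold quantization of capacitance readings is a made-up placeholder, not the paper's key-extraction circuit.

```python
import hashlib
import hmac

def key_from_capacitance(measured_pf, nominal_pf):
    """Toy quantization: one key bit per embedded capacitor, 1 if it measures
    above its nominal value (placeholder for the real extraction step)."""
    bits = ''.join('1' if m > n else '0' for m, n in zip(measured_pf, nominal_pf))
    return int(bits, 2).to_bytes((len(bits) + 7) // 8, 'big')

def respond(board_key: bytes, challenge: bytes) -> bytes:
    """Challenge-response: keyed hash of the challenge under the board-unique key."""
    return hmac.new(board_key, challenge, hashlib.sha256).digest()

# Verifier compares against a response recorded at enrollment time.
key = key_from_capacitance([10.3, 9.8, 10.1, 9.7], [10.0, 10.0, 10.0, 10.0])
assert respond(key, b"challenge-001") == respond(key, b"challenge-001")
```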
{
"docid": "neg:1840415_5",
"text": "Intrusion Detection System (IDS) have become increasingly popular over the past years as an important network security technology to detect cyber attacks in a wide variety of network communication. IDS monitors' network or host system activities by collecting network information, and analyze this information for malicious activities. Cloud computing, with the concept of Software as a Service (SaaS) presents an exciting benefit when it enables providers to rent their services to users in perform complex tasks over the Internet. In addition, Cloud based services reduce a cost in investing new infrastructure, training new personnel, or licensing new software. In this paper, we introduce a novel framework based on Cloud computing called Cloud-based Intrusion Detection Service (CBIDS). This model enables the identification of malicious activities from different points of network and overcome the deficiency of classical intrusion detection. CBIDS can be implemented to detect variety of attacks in private and public Clouds.",
"title": ""
},
{
"docid": "neg:1840415_6",
"text": "The sheer volume of multimedia contents generated by today's Internet services are stored in the cloud. The traditional indexing method associating the user-generated metadata with the content is vulnerable to the inaccuracy caused by the low quality of the metadata. While the content-based indexing does not depend on the error-prone metadata. However, the state-of-the-art research focuses on developing descriptive features and miss the system-oriented considerations when incorporating these features into the practical cloud computing systems. We propose an Update-Efficient and Parallel-Friendly content-based multimedia indexing system, called Partitioned Hash Forest (PHF). The PHF system incorporates the state-of-the-art content-based indexing models and multiple system-oriented optimizations. PHF contains an approximate content-based index and leverages the hierarchical memory system to support the high volume of updates. Additionally, the content-aware data partitioning and lock-free concurrency management module enable the parallel processing of the concurrent user requests. We evaluate PHF in terms of indexing accuracy and system efficiency by comparing it with the state-of-the-art content-based indexing algorithm and its variances. We achieve the significantly better accuracy with less resource consumption, around 37% faster in update processing and up to 2.5X throughput speedup in a multi-core platform comparing to other parallel-friendly designs.",
"title": ""
},
{
"docid": "neg:1840415_7",
"text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",
"title": ""
},
{
"docid": "neg:1840415_8",
"text": "This paper investigates the problem of robust H∞ output-feedback control for a class of nonlinear systems under unreliable communication links. The nonlinear plant is represented by a Takagi-Sugeno (T-S) uncertain fuzzy model, and the communication links between the plant and controller are assumed to be imperfect, i.e., data-packet dropouts occur intermittently, which is often the case in a network environment. Stochastic variables that satisfy the Bernoulli random-binary distribution are adopted to characterize the data-missing phenomenon, and the attention is focused on the design of a piecewise static-output-feedback (SOF) controller such that the closed-loop system is stochastically stable with a guaranteed H∞ performance. Based on a piecewise Lyapunov function combined with some novel convexifying techniques, the solutions to the problem are formulated in the form of linear matrix inequalities (LMIs). Finally, simulation examples are also provided to illustrate the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "neg:1840415_9",
"text": "This paper presents a Retrospective Event Detection algorithm, called Eventy-Topic Detection (ETD), which automatically generates topics that describe events in a large, temporal text corpus. Our approach leverages the structure of the topic modeling framework, specifically the Latent Dirichlet Allocation (LDA), to generate topics which are then later labeled as Eventy-Topics or non-Eventy-Topics. The system first runs daily LDA topic models, then calculates the cosine similarity between the topics of the daily topic models, and then runs our novel Bump-Detection algorithm. Similar topics labeled as an Eventy-Topic are then grouped together. The algorithm is demonstrated on two Terabyte sized corpuses a Reuters News corpus and a Twitter corpus. Our method is evaluated on a human annotated test set. Our algorithm demonstrates its ability to accurately describe and label events in a temporal text corpus.",
"title": ""
},
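The ETD pipeline above links topics from consecutive daily LDA models by cosine similarity before running Bump-Detection. A minimal sketch of that linking step is shown below; the 0.7 threshold is an assumed illustrative value, and Bump-Detection itself is not reproduced.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def link_daily_topics(topics_day1, topics_day2, threshold=0.7):
    """topics_dayX: (num_topics, vocab_size) word distributions from one day's LDA.
    Returns index pairs of topics considered the 'same' topic across the two days."""
    links = []
    for i, t1 in enumerate(topics_day1):
        for j, t2 in enumerate(topics_day2):
            if cosine(t1, t2) >= threshold:
                links.append((i, j))
    return links
```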
{
"docid": "neg:1840415_10",
"text": "There is a significant amount of controversy related to the optimal amount of dietary carbohydrate. This review summarizes the health-related positives and negatives associated with carbohydrate restriction. On the positive side, there is substantive evidence that for many individuals, low-carbohydrate, high-protein diets can effectively promote weight loss. Low-carbohydrate diets (LCDs) also can lead to favorable changes in blood lipids (i.e., decreased triacylglycerols, increased high-density lipoprotein cholesterol) and decrease the severity of hypertension. These positives should be balanced by consideration of the likelihood that LCDs often lead to decreased intakes of phytochemicals (which could increase predisposition to cardiovascular disease and cancer) and nondigestible carbohydrates (which could increase risk for disorders of the lower gastrointestinal tract). Diets restricted in carbohydrates also are likely to lead to decreased glycogen stores, which could compromise an individual's ability to maintain high levels of physical activity. LCDs that are high in saturated fat appear to raise low-density lipoprotein cholesterol and may exacerbate endothelial dysfunction. However, for the significant percentage of the population with insulin resistance or those classified as having metabolic syndrome or prediabetes, there is much experimental support for consumption of a moderately restricted carbohydrate diet (i.e., one providing approximately 26%-44 % of calories from carbohydrate) that emphasizes high-quality carbohydrate sources. This type of dietary pattern would likely lead to favorable changes in the aforementioned cardiovascular disease risk factors, while minimizing the potential negatives associated with consumption of the more restrictive LCDs.",
"title": ""
},
{
"docid": "neg:1840415_11",
"text": "In this paper, we introduce a generic inference hybrid framework for Convolutional Recurrent Neural Network (conv-RNN) of semantic modeling of text, seamless integrating the merits on extracting different aspects of linguistic information from both convolutional and recurrent neural network structures and thus strengthening the semantic understanding power of the new framework. Besides, based on conv-RNN, we also propose a novel sentence classification model and an attention based answer selection model with strengthening power for the sentence matching and classification respectively. We validate the proposed models on a very wide variety of data sets, including two challenging tasks of answer selection (AS) and five benchmark datasets for sentence classification (SC). To the best of our knowledge, it is by far the most complete comparison results in both AS and SC. We empirically show superior performances of conv-RNN in these different challenging tasks and benchmark datasets and also summarize insights on the performances of other state-of-the-arts methodologies.",
"title": ""
},
{
"docid": "neg:1840415_12",
"text": "One-hop broadcasting is the predominate form of network traffic in VANETs. Exchanging status information by broadcasting among the vehicles enhances vehicular active safety. Since there is no MAC layer broadcasting recovery for 802.11 based VANETs, efforts should be made towards more robust and effective transmission of such safety-related information. In this paper, a channel adaptive broadcasting method is proposed. It relies solely on channel condition information available at each vehicle by employing standard supported sequence number mechanisms. The proposed method is fully compatible with 802.11 and introduces no communication overhead. Simulation studies show that it outperforms standard broadcasting in term of reception rate and channel utilization.",
"title": ""
},
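The abstract above does not spell out the adaptation rule, so the following sketch is only a guess at the general idea: estimate a per-neighbour reception ratio from gaps in the 802.11 sequence numbers that were actually received, and repeat safety broadcasts more aggressively when the channel looks poor. Both the estimator and the repetition schedule are assumptions, not the paper's algorithm.

```python
def reception_ratio(received_seqnos):
    """Fraction of a neighbour's broadcasts we received, inferred from sequence gaps."""
    if len(received_seqnos) < 2:
        return 1.0
    expected = received_seqnos[-1] - received_seqnos[0] + 1
    return len(received_seqnos) / expected

def adapted_repetitions(ratio, base=1, max_rep=3):
    """Repeat a safety broadcast more often when the observed channel is poor."""
    if ratio > 0.9:
        return base
    if ratio > 0.6:
        return base + 1
    return max_rep

seqnos = [100, 101, 103, 104, 108]                     # gaps at 102 and 105-107
print(adapted_repetitions(reception_ratio(seqnos)))    # ratio ~ 0.56 -> 3 repetitions
```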
{
"docid": "neg:1840415_13",
"text": "In DSM-IV-TR, trichotillomania (TTM) is classified as an impulse control disorder (not classified elsewhere), skin picking lacks its own diagnostic category (but might be diagnosed as an impulse control disorder not otherwise specified), and stereotypic movement disorder is classified as a disorder usually first diagnosed in infancy, childhood, or adolescence. ICD-10 classifies TTM as a habit and impulse disorder, and includes stereotyped movement disorders in a section on other behavioral and emotional disorders with onset usually occurring in childhood and adolescence. This article provides a focused review of nosological issues relevant to DSM-V, given recent empirical findings. This review presents a number of options and preliminary recommendations to be considered for DSM-V: (1) Although TTM fits optimally into a category of body-focused repetitive behavioral disorders, in a nosology comprised of relatively few major categories it fits best within a category of motoric obsessive-compulsive spectrum disorders, (2) available evidence does not support continuing to include (current) diagnostic criteria B and C for TTM in DSM-V, (3) the text for TTM should be updated to describe subtypes and forms of hair pulling, (4) there are persuasive reasons for referring to TTM as \"hair pulling disorder (trichotillomania),\" (5) diagnostic criteria for skin picking disorder should be included in DSM-V or in DSM-Vs Appendix of Criteria Sets Provided for Further Study, and (6) the diagnostic criteria for stereotypic movement disorder should be clarified and simplified, bringing them in line with those for hair pulling and skin picking disorder.",
"title": ""
},
{
"docid": "neg:1840415_14",
"text": "Recently, a variety of bioactive protein drugs have been available in large quantities as a result of advances in biotechnology. Such availability has prompted development of long-term protein delivery systems. Biodegradable microparticulate systems have been used widely for controlled release of protein drugs for days and months. The most widely used biodegradable polymer has been poly(d,l-lactic-co-glycolic acid) (PLGA). Protein-containing microparticles are usually prepared by the water/oil/water (W/O/W) double emulsion method, and variations of this method, such as solid/oil/water (S/O/W) and water/oil/oil (W/O/O), have also been used. Other methods of preparation include spray drying, ultrasonic atomization, and electrospray methods. The important factors in developing biodegradable microparticles for protein drug delivery are protein release profile (including burst release, duration of release, and extent of release), microparticle size, protein loading, encapsulation efficiency, and bioactivity of the released protein. Many studies used albumin as a model protein, and thus, the bioactivity of the release protein has not been examined. Other studies which utilized enzymes, insulin, erythropoietin, and growth factors have suggested that the right formulation to preserve bioactivity of the loaded protein drug during the processing and storage steps is important. The protein release profiles from various microparticle formulations can be classified into four distinct categories (Types A, B, C, and D). The categories are based on the magnitude of burst release, the extent of protein release, and the protein release kinetics followed by the burst release. The protein loading (i.e., the total amount of protein loaded divided by the total weight of microparticles) in various microparticles is 6.7+/-4.6%, and it ranges from 0.5% to 20.0%. Development of clinically successful long-term protein delivery systems based on biodegradable microparticles requires improvement in the drug loading efficiency, control of the initial burst release, and the ability to control the protein release kinetics.",
"title": ""
},
{
"docid": "neg:1840415_15",
"text": "In this paper, we propose to use hardware performance counters (HPC) to detect malicious program modifications at load time (static) and at runtime (dynamic). HPC have been used for program characterization and testing, system testing and performance evaluation, and as side channels. We propose to use HPCs for static and dynamic integrity checking of programs.. The main advantage of HPC-based integrity checking is that it is almost free in terms of hardware cost; HPCs are built into almost all processors. The runtime performance overhead is minimal because we use the operating system for integrity checking, which is called anyway for process scheduling and other interrupts. Our preliminary results confirm that HPC very efficiently detect program modifications with very low cost.",
"title": ""
},
{
"docid": "neg:1840415_16",
"text": "In this paper, we describe SemEval-2013 Task 4: the definition, the data, the evaluation and the results. The task is to capture some of the meaning of English noun compounds via paraphrasing. Given a two-word noun compound, the participating system is asked to produce an explicitly ranked list of its free-form paraphrases. The list is automatically compared and evaluated against a similarly ranked list of paraphrases proposed by human annotators, recruited and managed through Amazon’s Mechanical Turk. The comparison of raw paraphrases is sensitive to syntactic and morphological variation. The “gold” ranking is based on the relative popularity of paraphrases among annotators. To make the ranking more reliable, highly similar paraphrases are grouped, so as to downplay superficial differences in syntax and morphology. Three systems participated in the task. They all beat a simple baseline on one of the two evaluation measures, but not on both measures. This shows that the task is difficult.",
"title": ""
},
{
"docid": "neg:1840415_17",
"text": "We present an iterative algorithm for calibrating vector network analyzers based on orthogonal distance regression. The algorithm features a robust, yet efficient, search algorithm, an error analysis that includes both random and systematic errors, a full covariance matrix relating calibration and measurement errors, 95% coverage factors, and an easy-to-use user interface that supports a wide variety of calibration standards. We also discuss evidence that the algorithm outperforms theMultiCal software package in the presence of measurement errors and accurately estimates the uncertainty of its results.",
"title": ""
}
] |
1840416 | gSpan: Graph-Based Substructure Pattern Mining | [
{
"docid": "pos:1840416_0",
"text": "Sequential pattern mining is an important data mining problem with broad applications. It is challenging since one may need to examine a combinatorially explosive number of possible subsequence patterns. Most of the previously developed sequential pattern mining methods follow the methodology of which may substantially reduce the number of combinations to be examined. However, still encounters problems when a sequence database is large and/or when sequential patterns to be mined are numerous and/or long. In this paper, we propose a novel sequential pattern mining method, called PrefixSpan (i.e., Prefix-projected Sequential pattern mining), which explores prefixprojection in sequential pattern mining. PrefixSpan mines the complete set of patterns but greatly reduces the efforts of candidate subsequence generation. Moreover, prefix-projection substantially reduces the size of projected databases and leads to efficient processing. Our performance study shows that PrefixSpan outperforms both the -based GSP algorithm and another recently proposed method, FreeSpan, in mining large sequence",
"title": ""
}
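To make the prefix-projection idea above concrete, here is a compact, simplified sketch: project the database onto each locally frequent item (keeping only the suffix after its first occurrence) and recurse, so patterns grow without candidate-subsequence generation. It handles single-item elements only and omits PrefixSpan's optimizations (pseudo-projection, bi-level projection), so it is an illustration rather than the published algorithm.

```python
from collections import Counter

def project(database, item):
    """Keep, for each sequence containing `item`, only the suffix after its first occurrence."""
    return [seq[seq.index(item) + 1:] for seq in database if item in seq]

def prefixspan(database, min_support, prefix=()):
    """Recursively grow frequent sequential patterns by prefix projection."""
    patterns = []
    support = Counter(item for seq in database for item in set(seq))
    for item, count in support.items():
        if count >= min_support:
            new_prefix = prefix + (item,)
            patterns.append((new_prefix, count))
            patterns += prefixspan(project(database, item), min_support, new_prefix)
    return patterns

db = [list("abcb"), list("abbca"), list("bca")]
print(prefixspan(db, min_support=2))
```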
] | [
{
"docid": "neg:1840416_0",
"text": "The tilt coordination technique is used in driving simulation for reproducing a sustained linear horizontal acceleration by tilting the simulator cabin. If combined with the translation motion of the simulator, this technique increases the acceleration rendering capabilities of the whole system. To perform this technique correctly, the rotational motion must be slow to remain under the perception threshold and thus be unnoticed by the driver. However, the acceleration to render changes quickly. Between the slow rotational motion limited by the tilt threshold and the fast change of acceleration to render, the design of the coupling between motions of rotation and translation plays a critical role in the realism of a driving simulator. This study focuses on the acceptance by drivers of different configurations for tilt restitution in terms of maximum tilt angle, tilt rate, and tilt acceleration. Two experiments were conducted, focusing respectively on roll tilt for a 0.2 Hz slaloming task and on pitch tilt for an acceleration/deceleration task. The results show what thresholds have to be followed in terms of amplitude, rate, and acceleration. These results are far superior to the standard human perception thresholds found in the literature.",
"title": ""
},
{
"docid": "neg:1840416_1",
"text": "Full vinyl polysiloxane casts of the vagina were obtained from 23 Afro-American, 39 Caucasian and 15 Hispanic women in lying, sitting and standing positions. A new shape, the pumpkin seed, was found in 40% of Afro-American women, but not in Caucasians or Hispanics. Analyses of cast and introital measurements revealed: (1) posterior cast length is significantly longer, anterior cast length is significantly shorter and cast width is significantly larger in Hispanics than in the other two groups and (2) the Caucasian introitus is significantly greater than that of the Afro-American subject.",
"title": ""
},
{
"docid": "neg:1840416_2",
"text": "In recent years, Deep Learning has become the go-to solution for a broad range of applications, often outperforming state-of-the-art. However, it is important, for both theoreticians and practitioners, to gain a deeper understanding of the difficulties and limitations associated with common approaches and algorithms. We describe four types of simple problems, for which the gradientbased algorithms commonly used in deep learning either fail or suffer from significant difficulties. We illustrate the failures through practical experiments, and provide theoretical insights explaining their source, and how they might be remedied.",
"title": ""
},
{
"docid": "neg:1840416_3",
"text": "This paper provides insights of possible plagiarism detection approach based on modern technologies – programming assignment versioning, auto-testing and abstract syntax tree comparison to estimate code similarities. Keywords—automation; assignment; testing; continuous integration INTRODUCTION In the emerging world of information technologies, a growing number of students is choosing this specialization for their education. Therefore, the number of homework and laboratory research assignments that should be tested is also growing. The majority of these tasks is based on the necessity to implement some algorithm as a small program. This article discusses the possible solutions to the problem of automated testing of programming laboratory research assignments. The course “Algorithmization and Programming of Solutions” is offered to all the first-year students of The Faculty of Computer Science and Information Technology (~500 students) in Riga Technical University and it provides the students the basics of the algorithmization of computing processes and the technology of program design using Java programming language (the given course and the University will be considered as an example of the implementation of the automated testing). During the course eight laboratory research assignments are planned, where the student has to develop an algorithm, create a program and submit it to the education portal of the University. The VBA test program was designed as one of the solutions, the requirements for each laboratory assignment were determined and the special tests have been created. At some point, however, the VBA offered options were no longer able to meet the requirements, therefore the activities on identifying the requirements for the automation of the whole cycle of programming work reception, testing and evaluation have begun. I. PLAGIARISM DETECTION APPROACHES To identify possible plagiarism detection techniques, it is imperative to define scoring or detecting threshold. Surely it is not an easy task, since only identical works can be considered as “true” plagiarism. In all other cases a person must make his decision whether two pieces of code are identical by their means or not. However, it is possible to outline some widespread approaches of assessment comparison. A. Manual Work Comparison In this case, all works must be compared one-by-one. Surely, this approach will lead to progressively increasing error rate due to human memory and cognitive function limitations. Large student group homework assessment verification can take long time, which is another contributing factor to errorrate increase. B. Diff-tool Application It is possible to compare two code fragments using semiautomated diff tool which provides information about Levenshtein distance between fragments. Although several visualization tools exist, it is quite easy to fool algorithm to believe that a code has multiple different elements in it, but all of them are actually another name for variables/functions/etc. without any additional contribution. C. Abstract Syntax Tree (AST) comparison Abstract syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. Example of AST is shown on Fig. 1.syntax tree is a tree representation of the abstract syntactic structure of source code written in a programming language. Each node of the tree denotes a construct occurring in the source code. 
Example of AST is shown on Fig. 1.",
"title": ""
},
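The passage above ends with the AST-comparison idea: compare programs by syntactic structure so that renamed identifiers cannot hide copying. The paper targets Java assignments; purely as an illustration, the sketch below applies the same idea to Python source with the standard ast and difflib modules. The node-type fingerprint and any similarity threshold layered on top of it are assumptions, not the paper's metric.

```python
import ast
import difflib

def structure(source: str):
    """Flatten a program into its sequence of AST node-type names, so that
    renamed variables and functions do not change the fingerprint."""
    return [type(node).__name__ for node in ast.walk(ast.parse(source))]

def similarity(src_a: str, src_b: str) -> float:
    """Structural similarity in [0, 1]; identical structure up to renaming gives 1.0."""
    return difflib.SequenceMatcher(None, structure(src_a), structure(src_b)).ratio()

a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
b = "def acc(vs):\n    r = 0\n    for v in vs:\n        r += v\n    return r\n"
print(similarity(a, b))   # close to 1.0 despite the different identifiers
```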
{
"docid": "neg:1840416_4",
"text": "In this paper, we introduce a novel reconfigurable architecture, named 3D field-programmable gate array (3D nFPGA), which utilizes 3D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS nanohybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into CMOS fabrication process. This architecture also has built-in features for fault tolerance and heat alleviation. Using unique features of FPGAs and a novel 3D stacking method enabled by the application of nanomaterials, 3D nFPGA obtains a 4x footprint reduction comparing to the traditional CMOS-based 2D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3D nFPGA is able to provide a performance gain of 2.6 x with a small power overhead comparing to the traditional 2D FPGA architecture.",
"title": ""
},
{
"docid": "neg:1840416_5",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "neg:1840416_6",
"text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.",
"title": ""
},
{
"docid": "neg:1840416_7",
"text": "Man-made objects usually exhibit descriptive curved features (i.e., curve networks). The curve network of an object conveys its high-level geometric and topological structure. We present a framework for extracting feature curve networks from unstructured point cloud data. Our framework first generates a set of initial curved segments fitting highly curved regions. We then optimize these curved segments to respect both data fitting and structural regularities. Finally, the optimized curved segments are extended and connected into curve networks using a clustering method. To facilitate effectiveness in case of severe missing data and to resolve ambiguities, we develop a user interface for completing the curve networks. Experiments on various imperfect point cloud data validate the effectiveness of our curve network extraction framework. We demonstrate the usefulness of the extracted curve networks for surface reconstruction from incomplete point clouds.",
"title": ""
},
{
"docid": "neg:1840416_8",
"text": "Meta-heuristic methods represent very powerful tools for dealing with hard combinatorial optimization problems. However, real life instances usually cannot be treated efficiently in \"reasonable\" computing times. Moreover, a major issue in metaheuristic design and calibration is to make them robust, i.e., to provide high performance solutions for a variety of problem settings. Parallel meta-heuristics aim to address both issues. The objective of this chapter is to present a state-of-the-art survey of the main parallel meta-heuristic ideas and strategies, and to discuss general design principles applicable to all meta-heuristic classes. To achieve this goal, we explain various paradigms related to parallel meta-heuristic development, where communications, synchronization and control aspects are the most relevant. We also discuss implementation issues, namely the influence of the target architecture on parallel execution of meta-heuristics, pointing out the characteristics of shared and distributed memory multiprocessor systems. All these topics are illustrated by examples from recent literature. These examples are related to the parallelization of various meta-heuristic methods, but we focus here on Variable Neighborhood Search and Bee Colony Optimization.",
"title": ""
},
{
"docid": "neg:1840416_9",
"text": "The known disorders of cholesterol biosynthesis have expanded rapidly since the discovery that Smith-Lemli-Opitz syndrome is caused by a deficiency of 7-dehydrocholesterol. Each of the six now recognized sterol disorders-mevalonic aciduria, Smith-Lemli-Opitz syndrome, desmosterolosis, Conradi-Hünermann syndrome, CHILD syndrome, and Greenberg dysplasia-has added to our knowledge of the relationship between cholesterol metabolism and embryogenesis. One of the most important lessons learned from the study of these disorders is that abnormal cholesterol metabolism impairs the function of the hedgehog class of embryonic signaling proteins, which help execute the vertebrate body plan during the earliest weeks of gestation. The study of the enzymes and genes in these several syndromes has also expanded and better delineated an important class of enzymes and proteins with diverse structural functions and metabolic actions that include sterol biosynthesis, nuclear transcriptional signaling, regulation of meiosis, and even behavioral modulation.",
"title": ""
},
{
"docid": "neg:1840416_10",
"text": "A developmental model of antisocial behavior is outlined. Recent findings are reviewed that concern the etiology and course of antisocial behavior from early childhood through adolescence. Evidence is presented in support of the hypothesis that the route to chronic delinquency is marked by a reliable developmental sequence of experiences. As a first step, ineffective parenting practices are viewed as determinants for childhood conduct disorders. The general model also takes into account the contextual variables that influence the family interaction process. As a second step, the conduct-disordered behaviors lead to academic failure and peer rejection. These dual failures lead, in turn, to increased risk for depressed mood and involvement in a deviant peer group. This third step usually occurs during later childhood and early adolescence. It is assumed that children following this developmental sequence are at high risk for engaging in chronic delinquent behavior. Finally, implications for prevention and intervention are discussed.",
"title": ""
},
{
"docid": "neg:1840416_11",
"text": "This paper presents a family of techniques that we call congealing for modeling image classes from data. The idea is to start with a set of images and make them appear as similar as possible by removing variability along the known axes of variation. This technique can be used to eliminate \"nuisance\" variables such as affine deformations from handwritten digits or unwanted bias fields from magnetic resonance images. In addition to separating and modeling the latent images - i.e., the images without the nuisance variables - we can model the nuisance variables themselves, leading to factorized generative image models. When nuisance variable distributions are shared between classes, one can share the knowledge learned in one task with another task, leading to efficient learning. We demonstrate this process by building a handwritten digit classifier from just a single example of each class. In addition to applications in handwritten character recognition, we describe in detail the application of bias removal from magnetic resonance images. Unlike previous methods, we use a separate, nonparametric model for the intensity values at each pixel. This allows us to leverage the data from the MR images of different patients to remove bias from each other. Only very weak assumptions are made about the distributions of intensity values in the images. In addition to the digit and MR applications, we discuss a number of other uses of congealing and describe experiments about the robustness and consistency of the method.",
"title": ""
},
{
"docid": "neg:1840416_12",
"text": "This paper presents fundamental results about how zero-curvature (paper) surfaces behave near creases and apices of cones. These entities are natural generalizations of the edges and vertices of piecewise-planar surfaces. Consequently, paper surfaces may furnish a richer and yet still tractable class of surfaces for computer-aided design and computer graphics applications than do polyhedral surfaces.",
"title": ""
},
{
"docid": "neg:1840416_13",
"text": "In this paper, we present a Self-Supervised Neural Aggregation Network (SS-NAN) for human parsing. SS-NAN adaptively learns to aggregate the multi-scale features at each pixel \"address\". In order to further improve the feature discriminative capacity, a self-supervised joint loss is adopted as an auxiliary learning strategy, which imposes human joint structures into parsing results without resorting to extra supervision. The proposed SS-NAN is end-to-end trainable. SS-NAN can be integrated into any advanced neural networks to help aggregate features regarding the importance at different positions and scales and incorporate rich high-level knowledge regarding human joint structures from a global perspective, which in turn improve the parsing results. Comprehensive evaluations on the recent Look into Person (LIP) and the PASCAL-Person-Part benchmark datasets demonstrate the significant superiority of our method over other state-of-the-arts.",
"title": ""
},
{
"docid": "neg:1840416_14",
"text": "We propose a structured prediction architecture, which exploits the local generic features extracted by Convolutional Neural Networks and the capacity of Recurrent Neural Networks (RNN) to retrieve distant dependencies. The proposed architecture, called ReSeg, is based on the recently introduced ReNet model for image classification. We modify and extend it to perform the more challenging task of semantic segmentation. Each ReNet layer is composed of four RNN that sweep the image horizontally and vertically in both directions, encoding patches or activations, and providing relevant global information. Moreover, ReNet layers are stacked on top of pre-trained convolutional layers, benefiting from generic local features. Upsampling layers follow ReNet layers to recover the original image resolution in the final predictions. The proposed ReSeg architecture is efficient, flexible and suitable for a variety of semantic segmentation tasks. We evaluate ReSeg on several widely-used semantic segmentation datasets: Weizmann Horse, Oxford Flower, and CamVid, achieving stateof-the-art performance. Results show that ReSeg can act as a suitable architecture for semantic segmentation tasks, and may have further applications in other structured prediction problems. The source code and model hyperparameters are available on https://github.com/fvisin/reseg.",
"title": ""
},
{
"docid": "neg:1840416_15",
"text": "This paper presents a hybrid algorithm for parameter estimation of synchronous generator. For large-residual problems (i.e., f(x) is large or f(x) is severely nonlinear), the performance of the Gauss-Newton method and Levenberg-Marquardt method is usually poor, and the slow convergence even causes iteration emergence divergence. The Quasi-Newton method can superlinearly converge, but it is not robust in the global stage of the iteration. Hybrid algorithm combining the two methods above is proved globally convergent with a high convergence speed through the example of synchronous generator parameter identification.",
"title": ""
},
{
"docid": "neg:1840416_16",
"text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.",
"title": ""
},
{
"docid": "neg:1840416_17",
"text": "This paper proposes a planning method based on forward path generation and backward tracking algorithm for Automatic Parking Systems, especially suitable for backward parking situations. The algorithm is based on the steering property that backward moving trajectory coincides with the forward moving trajectory for the identical steering angle. The basic path planning is divided into two segments: a collision-free locating segment and an entering segment that considers the continuous steering angles for connecting the two paths. MATLAB simulations were conducted, along with experiments involving parallel and perpendicular situations.",
"title": ""
},
{
"docid": "neg:1840416_18",
"text": "History of mental illness is a major factor behind suicide risk and ideation. However research efforts toward characterizing and forecasting this risk is limited due to the paucity of information regarding suicide ideation, exacerbated by the stigma of mental illness. This paper fills gaps in the literature by developing a statistical methodology to infer which individuals could undergo transitions from mental health discourse to suicidal ideation. We utilize semi-anonymous support communities on Reddit as unobtrusive data sources to infer the likelihood of these shifts. We develop language and interactional measures for this purpose, as well as a propensity score matching based statistical approach. Our approach allows us to derive distinct markers of shifts to suicidal ideation. These markers can be modeled in a prediction framework to identify individuals likely to engage in suicidal ideation in the future. We discuss societal and ethical implications of this research.",
"title": ""
},
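The study above uses propensity score matching to compare users who later shift toward suicidal-ideation discourse with otherwise similar users who do not. As background only, the sketch below shows generic 1:1 nearest-neighbour matching on a logistic-regression propensity score; the covariates, treatment definition and matching rule are placeholders, not the paper's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_score_match(X, treated):
    """1:1 nearest-neighbour matching without replacement on the propensity score.
    X: (n, d) covariates; treated: boolean numpy array marking the 'treatment' group."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    available = set(np.where(~treated)[0])
    pairs = []
    for t in np.where(treated)[0]:
        if not available:
            break
        best = min(available, key=lambda c: abs(ps[c] - ps[t]))   # closest control
        pairs.append((t, best))
        available.remove(best)
    return pairs
```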
{
"docid": "neg:1840416_19",
"text": "In 1984, a prospective cohort study, Coronary Artery Risk Development in Young Adults (CARDIA) was initiated to investigate life-style and other factors that influence, favorably and unfavorably, the evolution of coronary heart disease risk factors during young adulthood. After a year of planning and protocol development, 5,116 black and white women and men, age 18-30 years, were recruited and examined in four urban areas: Birmingham, Alabama; Chicago, Illinois; Minneapolis, Minnesota, and Oakland, California. The initial examination included carefully standardized measurements of major risk factors as well as assessments of psychosocial, dietary, and exercise-related characteristics that might influence them, or that might be independent risk factors. This report presents the recruitment and examination methods as well as the mean levels of blood pressure, total plasma cholesterol, height, weight and body mass index, and the prevalence of cigarette smoking by age, sex, race and educational level. Compared to recent national samples, smoking is less prevalent in CARDIA participants, and weight tends to be greater. Cholesterol levels are representative and somewhat lower blood pressures in CARDIA are probably, at least in part, due to differences in measurement methods. Especially noteworthy among several differences in risk factor levels by demographic subgroup, were a higher body mass index among black than white women and much higher prevalence of cigarette smoking among persons with no more than a high school education than among those with more education.",
"title": ""
}
] |
1840417 | An Empirical Study on the Usage of the Swift Programming Language | [
{
"docid": "pos:1840417_0",
"text": "Programming question and answer (Q&A) websites, such as Stack Overflow, leverage the knowledge and expertise of users to provide answers to technical questions. Over time, these websites turn into repositories of software engineering knowledge. Such knowledge repositories can be invaluable for gaining insight into the use of specific technologies and the trends of developer discussions. Previous work has focused on analyzing the user activities or the social interactions in Q&A websites. However, analyzing the actual textual content of these websites can help the software engineering community to better understand the thoughts and needs of developers. In the article, we present a methodology to analyze the textual content of Stack Overflow discussions. We use latent Dirichlet allocation (LDA), a statistical topic modeling technique, to automatically discover the main topics present in developer discussions. We analyze these discovered topics, as well as their relationships and trends over time, to gain insights into the development community. Our analysis allows us to make a number of interesting observations, including: the topics of interest to developers range widely from jobs to version control systems to C# syntax; questions in some topics lead to discussions in other topics; and the topics gaining the most popularity over time are web development (especially jQuery), mobile applications (especially Android), Git, and MySQL.",
"title": ""
}
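The analysis above fits LDA topic models to Stack Overflow discussion text. A minimal sketch of such a fit with the gensim library is shown below; the toy posts, crude whitespace tokenization and hyperparameters (3 topics, 10 passes) are assumptions for illustration, not the study's settings.

```python
from gensim import corpora, models

# posts: Stack Overflow question/answer bodies, fetched elsewhere (toy examples here).
posts = [
    "how do I merge two branches in git",
    "android activity lifecycle onCreate onResume",
    "mysql join two tables on a foreign key",
]
texts = [p.lower().split() for p in posts]        # deliberately crude tokenization

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = models.LdaModel(corpus, num_topics=3, id2word=dictionary,
                      passes=10, random_state=0)
for topic_id in range(3):
    print(lda.print_topic(topic_id, topn=5))      # top words of each discovered topic
```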
] | [
{
"docid": "neg:1840417_0",
"text": "AIMS\n(a) To investigate how widespread is the use of long term treatment without improvement amongst clinicians treating individuals with low back pain. (b) To study the beliefs behind the reasons why chiropractors, osteopaths and physiotherapists continue to treat people whose low back pain appears not to be improving.\n\n\nMETHODS\nA mixed methods study, including a questionnaire survey and qualitative analysis of semi-structured interviews. Questionnaire survey; 354/600 (59%) clinicians equally distributed between chiropractic, osteopathy and physiotherapy professions. Interview study; a purposive sample of fourteen clinicians from each profession identified from the survey responses. Methodological techniques ranged from grounded theory analysis to sorting of categories by both the research team and the subjects themselves.\n\n\nRESULTS\nAt least 10% of each of the professions reported that they continued to treat patients with low back pain who showed almost no improvement for over three months. There is some indication that this is an underestimate. reasons for continuing unsuccessful management of low back pain were not found to be primarily monetary in nature; rather it appears to have much more to do with the scope of care that extends beyond issues addressed in the current physical therapy guidelines. The interview data showed that clinicians viewed their role as including health education and counselling rather than a 'cure or refer' approach. Additionally, participants raised concerns that discharging patients from their care meant sending them to into a therapeutic void.\n\n\nCONCLUSION\nLong-term treatment of patients with low back pain without objective signs of improvement is an established practice in a minority of clinicians studied. This approach contrasts with clinical guidelines that encourage self-management, reassurance, re-activation, and involvement of multidisciplinary teams for patients who do not recover. Some of the rationale provided makes a strong case for ongoing contact. However, the practice is also maintained through poor communication with other professions and mistrust of the healthcare system.",
"title": ""
},
{
"docid": "neg:1840417_1",
"text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.",
"title": ""
},
{
"docid": "neg:1840417_2",
"text": "A systematic review of randomized clinical trials was conducted to evaluate the acceptability and usefulness of computerized patient education interventions. The Columbia Registry, MEDLINE, Health, BIOSIS, and CINAHL bibliographic databases were searched. Selection was based on the following criteria: (1) randomized controlled clinical trials, (2) educational patient-computer interaction, and (3) effect measured on the process or outcome of care. Twenty-two studies met the selection criteria. Of these, 13 (59%) used instructional programs for educational intervention. Five studies (22.7%) tested information support networks, and four (18%) evaluated systems for health assessment and history-taking. The most frequently targeted clinical application area was diabetes mellitus (n = 7). All studies, except one on the treatment of alcoholism, reported positive results for interactive educational intervention. All diabetes education studies, in particular, reported decreased blood glucose levels among patients exposed to this intervention. Computerized educational interventions can lead to improved health status in several major areas of care, and appear not to be a substitute for, but a valuable supplement to, face-to-face time with physicians.",
"title": ""
},
{
"docid": "neg:1840417_3",
"text": "Resource allocation efficiency and energy consumption are among the top concerns to today's Cloud data center. Finding the optimal point where users' multiple job requests can be accomplished timely with minimum electricity and hardware cost is one of the key factors for system designers and managers to optimize the system configurations. Understanding the characteristics of the distribution of user task is an essential step for this purpose. At large-scale Cloud Computing data centers, a precise workload prediction will significantly help designers and operators to schedule hardware/software resources and power supplies in a more efficient manner, and make appropriate decisions to upgrade the Cloud system when the workload grows. While a lot of study has been conducted for hypervisor-based Cloud, container-based virtualization is becoming popular because of the low overhead and high efficiency in utilizing computing resources. In this paper, we have studied a set of real-world container data center traces from part of Google's cluster. We investigated the distribution of job duration, waiting time and machine utilization and the number of jobs submitted in a fix time period. Based on the quantitative study, an Ensemble Workload Prediction (EnWoP) method and a novel prediction evaluation parameter called Cloud Workload Correction Rate (C-Rate) have been proposed. The experimental results have verified that the EnWoP method achieved high prediction accuracy and the C-Rate evaluates the prediction methods more objective.",
"title": ""
},
{
"docid": "neg:1840417_4",
"text": "Weexamine the problem of learning and planning on high-dimensional domains with long horizons and sparse rewards. Recent approaches have shown great successes in many Atari 2600 domains. However, domains with long horizons and sparse rewards, such as Montezuma’s Revenge and Venture, remain challenging for existing methods. Methods using abstraction [5, 13] have shown to be useful in tackling long-horizon problems. We combine recent techniques of deep reinforcement learning with existing model-based approaches using an expert-provided state abstraction. We construct toy domains that elucidate the problem of long horizons, sparse rewards and high-dimensional inputs, and show that our algorithm significantly outperforms previous methods on these domains. Our abstraction-based approach outperforms Deep QNetworks [11] on Montezuma’s Revenge and Venture, and exhibits backtracking behavior that is absent from previous methods.",
"title": ""
},
{
"docid": "neg:1840417_5",
"text": "Deep domain adaptation has emerged as a new learning technique to address the lack of massive amounts of labeled data. Compared to conventional methods, which learn shared feature subspaces or reuse important source instances with shallow representations, deep domain adaptation methods leverage deep networks to learn more transferable representations by embedding domain adaptation in the pipeline of deep learning. There have been comprehensive surveys for shallow domain adaptation, but few timely reviews the emerging deep learning based methods. In this paper, we provide a comprehensive survey of deep domain adaptation methods for computer vision applications with four major contributions. First, we present a taxonomy of different deep domain adaptation scenarios according to the properties of data that define how two domains are diverged. Second, we summarize deep domain adaptation approaches into several categories based on training loss, and analyze and compare briefly the state-of-the-art methods under these categories. Third, we overview the computer vision applications that go beyond image classification, such as face recognition, semantic segmentation and object detection. Fourth, some potential deficiencies of current methods and several future directions are highlighted.",
"title": ""
},
{
"docid": "neg:1840417_6",
"text": "Many commercial websites use recommender systems to help customers locate products and content. Modern recommenders are based on collaborative filtering: they use patterns learned from users' behavior to make recommendations, usually in the form of related-items lists. The scale and complexity of these systems, along with the fact that their outputs reveal only relationships between items (as opposed to information about users), may suggest that they pose no meaningful privacy risk. In this paper, we develop algorithms which take a moderate amount of auxiliary information about a customer and infer this customer's transactions from temporal changes in the public outputs of a recommender system. Our inference attacks are passive and can be carried out by any Internet user. We evaluate their feasibility using public data from popular websites Hunch, Last. fm, Library Thing, and Amazon.",
"title": ""
},
{
"docid": "neg:1840417_7",
"text": "Article history: Received 6 March 2008 Received in revised form 14 August 2008 Accepted 6 October 2008",
"title": ""
},
{
"docid": "neg:1840417_8",
"text": "Dissolved gas analysis (DGA) is used to assess the condition of power transformers. It uses the concentrations of various gases dissolved in the transformer oil due to decomposition of the oil and paper insulation. DGA has gained worldwide acceptance as a method for the detection of incipient faults in transformers.",
"title": ""
},
{
"docid": "neg:1840417_9",
"text": "I. Cantador (), P. Castells Universidad Autónoma de Madrid 28049 Madrid, Spain e-mails: ivan.cantador@uam.es, pablo.castells@uam.es Abstract An increasingly important type of recommender systems comprises those that generate suggestions for groups rather than for individuals. In this chapter, we revise state of the art approaches on group formation, modelling and recommendation, and present challenging problems to be included in the group recommender system research agenda in the context of the Social Web.",
"title": ""
},
{
"docid": "neg:1840417_10",
"text": "This paper summarizes the current knowledge regarding the possible modes of action and nutritional factors involved in the use of essential oils (EOs) for swine and poultry. EOs have recently attracted increased interest as feed additives to be fed to swine and poultry, possibly replacing the use of antibiotic growth promoters which have been prohibited in the European Union since 2006. In general, EOs enhance the production of digestive secretions and nutrient absorption, reduce pathogenic stress in the gut, exert antioxidant properties and reinforce the animal’s immune status, which help to explain the enhanced performance observed in swine and poultry. However, the mechanisms involved in causing this growth promotion are far from being elucidated, since data on the complex gut ecosystem, gut function, in vivo oxidative status and immune system are still lacking. In addition, limited information is available regarding the interaction between EOs and feed ingredients or other feed additives (especially pro- or prebiotics and organic acids). This knowledge may help feed formulators to better utilize EOs when they formulate diets for poultry and swine.",
"title": ""
},
{
"docid": "neg:1840417_11",
"text": "The paper presents performance analysis of modified SEPIC dc-dc converter with low input voltage and wide output voltage range. The operational analysis and the design is done for the 380W power output of the modified converter. The simulation results of modified SEPIC converter are obtained with PI controller for the output voltage. The results obtained with the modified converter are compared with the basic SEPIC converter topology for the rise time, peak time, settling time and steady state error of the output response for open loop. Voltage tracking curve is also shown for wide output voltage range. I. Introduction Dc-dc converters are widely used in regulated switched mode dc power supplies and in dc motor drive applications. The input to these converters is often an unregulated dc voltage, which is obtained by rectifying the line voltage and it will therefore fluctuate due to variations of the line voltages. Switched mode dc-dc converters are used to convert this unregulated dc input into a controlled dc output at a desired voltage level. The recent growth of battery powered applications and low voltage storage elements are increasing the demand of efficient step-up dc–dc converters. Typical applications are in adjustable speed drives, switch-mode power supplies, uninterrupted power supplies, and utility interface with nonconventional energy sources, battery energy storage systems, battery charging for electric vehicles, and power supplies for telecommunication systems etc.. These applications demand high step-up static gain, high efficiency and reduced weight, volume and cost. The step-up stage normally is the critical point for the design of high efficiency converters due to the operation with high input current and high output voltage [1]. The boost converter topology is highly effective in these applications but at low line voltage in boost converter, the switching losses are high because the input current has the maximum value and the highest step-up conversion is required. The inductor has to be oversized for the large current at low line input. As a result, a boost converter designed for universal-input applications is heavily oversized compared to a converter designed for a narrow range of input ac line voltage [2]. However, recently new non-isolated dc–dc converter topologies with basic boost are proposed, showing that it is possible to obtain high static gain, low voltage stress and low losses, improving the performance with respect to the classical topologies. Some single stage high power factor rectifiers are presented in [3-6]. A new …",
"title": ""
},
{
"docid": "neg:1840417_12",
"text": "From a system architecture perspective, 3D technology can satisfy the high memory bandwidth demands that future multicore/manycore architectures require. This article presents a 3D DRAM architecture design and the potential for using 3D DRAM stacking for both L2 cache and main memory in 3D multicore architecture.",
"title": ""
},
{
"docid": "neg:1840417_13",
"text": "Malware sandboxes, widely used by antivirus companies, mobile application marketplaces, threat detection appliances, and security researchers, face the challenge of environment-aware malware that alters its behavior once it detects that it is being executed on an analysis environment. Recent efforts attempt to deal with this problem mostly by ensuring that well-known properties of analysis environments are replaced with realistic values, and that any instrumentation artifacts remain hidden. For sandboxes implemented using virtual machines, this can be achieved by scrubbing vendor-specific drivers, processes, BIOS versions, and other VM-revealing indicators, while more sophisticated sandboxes move away from emulation-based and virtualization-based systems towards bare-metal hosts. We observe that as the fidelity and transparency of dynamic malware analysis systems improves, malware authors can resort to other system characteristics that are indicative of artificial environments. We present a novel class of sandbox evasion techniques that exploit the \"wear and tear\" that inevitably occurs on real systems as a result of normal use. By moving beyond how realistic a system looks like, to how realistic its past use looks like, malware can effectively evade even sandboxes that do not expose any instrumentation indicators, including bare-metal systems. We investigate the feasibility of this evasion strategy by conducting a large-scale study of wear-and-tear artifacts collected from real user devices and publicly available malware analysis services. The results of our evaluation are alarming: using simple decision trees derived from the analyzed data, malware can determine that a system is an artificial environment and not a real user device with an accuracy of 92.86%. As a step towards defending against wear-and-tear malware evasion, we develop statistical models that capture a system's age and degree of use, which can be used to aid sandbox operators in creating system images that exhibit a realistic wear-and-tear state.",
"title": ""
},
{
"docid": "neg:1840417_14",
"text": "This paper proposes a new scheme for multi-image projective reconstruction based on a projective grid space. The projective grid space is defined by two basis views and the fundamental matrix relating these views. Given fundamental matrices relating other views to each of the two basis views, this projective grid space can be related to any view. In the projective grid space as a general space that is related to all images, a projective shape can be reconstructed from all the images of weakly calibrated cameras. The projective reconstruction is one way to reduce the effort of the calibration because it does not need Euclid metric information, but rather only correspondences of several points between the images. For demonstrating the effectiveness of the proposed projective grid definition, we modify the voxel coloring algorithm for the projective voxel scheme. The quality of the virtual view images re-synthesized from the projective shape demonstrates the effectiveness of our proposed scheme for projective reconstruction from a large number of images.",
"title": ""
},
{
"docid": "neg:1840417_15",
"text": "Emotionally Focused Therapy for Couples (EFT) is a brief evidence-based couple therapy based in attachment theory. Since the development of EFT, efficacy and effectiveness research has accumulated to address a range of couple concerns. EFT meets or exceeds the guidelines for classification as an evidence-based couple therapy outlined for couple and family research. Furthermore, EFT researchers have examined the process of change and predictors of outcome in EFT. Future research in EFT will continue to examine the process of change in EFT and test the efficacy and effectiveness of EFT in new applications and for couples of diverse backgrounds and concerns.",
"title": ""
},
{
"docid": "neg:1840417_16",
"text": "Photo-based activity on social networking sites has recently been identified as contributing to body image concerns. The present study aimed to investigate experimentally the effect of number of likes accompanying Instagram images on women's own body dissatisfaction. Participants were 220 female undergraduate students who were randomly assigned to view a set of thin-ideal or average images paired with a low or high number of likes presented in an Instagram frame. Results showed that exposure to thin-ideal images led to greater body and facial dissatisfaction than average images. While the number of likes had no effect on body dissatisfaction or appearance comparison, it had a positive effect on facial dissatisfaction. These effects were not moderated by Instagram involvement, but greater investment in Instagram likes was associated with more appearance comparison and facial dissatisfaction. The results illustrate how the uniquely social interactional aspects of social media (e.g., likes) can affect body image.",
"title": ""
},
{
"docid": "neg:1840417_17",
"text": "The performance of a brushless motor which has a surface-mounted magnet rotor and a trapezoidal back-emf waveform when it is operated in BLDC and BLAC modes is evaluated, in both constant torque and flux-weakening regions, assuming the same torque, the same peak current, and the same rms current. It is shown that although the motor has an essentially trapezoidal back-emf waveform, the output power and torque when operated in the BLAC mode in the flux-weakening region are significantly higher than that can be achieved when operated in the BLDC mode due to the influence of the winding inductance and back-emf harmonics",
"title": ""
},
{
"docid": "neg:1840417_18",
"text": "Discourse coherence is strongly associated with text quality, making it important to natural language generation and understanding. Yet existing models of coherence focus on measuring individual aspects of coherence (lexical overlap, rhetorical structure, entity centering) in narrow domains. In this paper, we describe domainindependent neural models of discourse coherence that are capable of measuring multiple aspects of coherence in existing sentences and can maintain coherence while generating new sentences. We study both discriminative models that learn to distinguish coherent from incoherent discourse, and generative models that produce coherent text, including a novel neural latentvariable Markovian generative model that captures the latent discourse dependencies between sentences in a text. Our work achieves state-of-the-art performance on multiple coherence evaluations, and marks an initial step in generating coherent texts given discourse contexts.",
"title": ""
},
{
"docid": "neg:1840417_19",
"text": "The lack of an integrated medical information service model has been considered as a main issue in ensuring the continuity of healthcare from doctors, healthcare professionals to patients; the resultant unavailable, inaccurate, or unconformable healthcare information services have been recognized as main causes to the annual millions of medication errors. This paper proposes an Internet computing model aimed at providing an affordable, interoperable, ease of integration, and systematic approach to the development of a medical information service network to enable the delivery of continuity of healthcare. Web services, wireless, and advanced automatic identification technologies are fully integrated in the proposed service model. Some preliminary research results are presented.",
"title": ""
}
] |
1840418 | Bounded Rationality, Abstraction, and Hierarchical Decision-Making: An Information-Theoretic Optimality Principle | [
{
"docid": "pos:1840418_0",
"text": "We present a reformulation of the stochastic optimal control problem in terms of KL divergence minimisation, not only providing a unifying perspective of previous approaches in this area, but also demonstrating that the formalism leads to novel practical approaches to the control problem. Specifically, a natural relaxation of the dual formulation gives rise to exact iterative solutions to the finite and infinite horizon stochastic optimal control problem, while direct application of Bayesian inference methods yields instances of risk sensitive control. We furthermore study corresponding formulations in the reinforcement learning setting and present model free algorithms for problems with both discrete and continuous state and action spaces. Evaluation of the proposed methods on the standard Gridworld and Cart-Pole benchmarks verifies the theoretical insights and shows that the proposed methods improve upon current approaches.",
"title": ""
},
{
"docid": "pos:1840418_1",
"text": "Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be “laws of nature” by practitioners of numerical computing: 1. High-level dynamic programs have to be slow. 2. One must prototype in one language and then rewrite in another language for speed or deployment. 3. There are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design—a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after differences are stripped away. Abstractions in mathematics are captured as code through another technique from computer science, generic programming. Julia shows that one can achieve machine performance without sacrificing human convenience.",
"title": ""
}
] | [
{
"docid": "neg:1840418_0",
"text": "The effects of beverage alcohol (ethanol) on the body are determined largely by the rate at which it and its main breakdown product, acetaldehyde, are metabolized after consumption. The main metabolic pathway for ethanol involves the enzymes alcohol dehydrogenase (ADH) and aldehyde dehydrogenase (ALDH). Seven different ADHs and three different ALDHs that metabolize ethanol have been identified. The genes encoding these enzymes exist in different variants (i.e., alleles), many of which differ by a single DNA building block (i.e., single nucleotide polymorphisms [SNPs]). Some of these SNPs result in enzymes with altered kinetic properties. For example, certain ADH1B and ADH1C variants that are commonly found in East Asian populations lead to more rapid ethanol breakdown and acetaldehyde accumulation in the body. Because acetaldehyde has harmful effects on the body, people carrying these alleles are less likely to drink and have a lower risk of alcohol dependence. Likewise, an ALDH2 variant with reduced activity results in acetaldehyde buildup and also has a protective effect against alcoholism. In addition to affecting drinking behaviors and risk for alcoholism, ADH and ALDH alleles impact the risk for esophageal cancer.",
"title": ""
},
{
"docid": "neg:1840418_1",
"text": "In Slavic languages, verbal prefixes can be applied to perfective verbs deriving new perfective verbs, and multiple prefixes can occur in a single verb. This well-known type of data has not yet been adequately analyzed within current approaches to the semantics of Slavic verbal prefixes and aspect. The notion “aspect” covers “grammatical aspect”, or “viewpoint aspect” (see Smith 1991/1997), best characterized by the formal perfective vs. imperfective distinction, which is often expressed by inflectional morphology (as in Romance languages), and corresponds to propositional operators at the semantic level of representation. It also covers “lexical aspect”, “situation aspect” (see Smith ibid.), “eventuality types” (Bach 1981, 1986), or “Aktionsart” (as in Hinrichs 1985; Van Valin 1990; Dowty 1999; Paslawska and von Stechow 2002, for example), which regards the telic vs. atelic distinction and its Vendlerian subcategories (activities, accomplishments, achievements and states). It is lexicalized by verbs, encoded by derivational morphology, or by a variety of elements at the level of syntax, among which the direct object argument has a prominent role, however, the subject (external) argument is arguably a contributing factor, as well (see Dowty 1991, for example). These two “aspect” categories are orthogonal to each other and interact in systematic ways (see also Filip 1992, 1997, 1993/99; de Swart 1998; Paslawska and von Stechow 2002; Rothstein 2003, for example). Multiple prefixation and application of verbal prefixes to perfective bases is excluded by the common view of Slavic prefixes, according to which all perfective verbs are telic and prefixes constitute a uniform class of “perfective” markers that that are applied to imperfective verbs that are atelic and derive perfective verbs that are telic. Moreover, this view of perfective verbs and prefixes predicts rampant violations of the intuitive “one delimitation per event” constraint, whenever a prefix is applied to a perfective verb. This intuitive constraint is motivated by the observation that an event expressed within a single predication can be delimited only once: cp. *run a mile for ten minutes, *wash the clothes clean white.",
"title": ""
},
{
"docid": "neg:1840418_2",
"text": "Aiming at inferring 3D shapes from 2D images, 3D shape reconstruction has drawn huge attention from researchers in computer vision and deep learning communities. However, it is not practical to assume that 2D input images and their associated ground truth 3D shapes are always available during training. In this paper, we propose a framework for semi-supervised 3D reconstruction. This is realized by our introduced 2D-3D self-consistency, which aligns the predicted 3D models and the projected 2D foreground segmentation masks. Moreover, our model not only enables recovering 3D shapes with the corresponding 2D masks, camera pose information can be jointly disentangled and predicted, even such supervision is never available during training. In the experiments, we qualitatively and quantitatively demonstrate the effectiveness of our model, which performs favorably against state-of-the-art approaches in either supervised or semi-supervised settings.",
"title": ""
},
{
"docid": "neg:1840418_3",
"text": "This paper presents a technique for word segmentation for the Urdu OCR system. Word segmentation or word tokenization is a preliminary task for Urdu language processing. Several techniques are available for word segmentation in other languages. A methodology is proposed for word segmentation in this paper which determines the boundaries of words given a sequence of ligatures, based on collocation of ligatures and words in the corpus. Using this technique, word identification rate of 96.10% is achieved, using trigram probabilities normalized over the number of ligatures and words in the sequence.",
"title": ""
},
{
"docid": "neg:1840418_4",
"text": "Portable Document Format (PDF) is one of the widely-accepted document format. However, it becomes one of the most attractive targets for exploitation by malware developers and vulnerability researchers. Malicious PDF files can be used in Advanced Persistent Threats (APTs) targeting individuals, governments, and financial sectors. The existing tools such as intrusion detection systems (IDSs) and antivirus packages are inefficient to mitigate this kind of attacks. This is because these techniques need regular updates with the new malicious PDF files which are increasing every day. In this paper, a new algorithm is presented for detecting malicious PDF files based on data mining techniques. The proposed algorithm consists of feature selection stage and classification stage. The feature selection stage is used to the select the optimum number of features extracted from the PDF file to achieve high detection rate and low false positive rate with small computational overhead. Experimental results show that the proposed algorithm can achieve 99.77% detection rate, 99.84% accuracy, and 0.05% false positive rate.",
"title": ""
},
{
"docid": "neg:1840418_5",
"text": "With the growing use of distributed information networks, there is an increasing need for algorithmic and system solutions for data-driven knowledge acquisition using distributed, heterogeneous and autonomous data repositories. In many applications, practical constraints require such systems to provide support for data analysis where the data and the computational resources are available. This presents us with distributed learning problems. We precisely formulate a class of distributed learning problems; present a general strategy for transforming traditional machine learning algorithms into distributed learning algorithms; and demonstrate the application of this strategy to devise algorithms for decision tree induction (using a variety of splitting criteria) from distributed data. The resulting algorithms are provably exact in that the decision tree constructed from distributed data is identical to that obtained by the corresponding algorithm when in the batch setting. The distributed decision tree induction algorithms have been implemented as part of INDUS, an agent-based system for data-driven knowledge acquisition from heterogeneous, distributed, autonomous data sources.",
"title": ""
},
{
"docid": "neg:1840418_6",
"text": "This paper proposes a novel approach for the evolution of artificial creatures which moves in a 3D virtual environment based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it, by controlling the muscle force of the creature. The genetic algorithm is used to emerge the architecture of creature based on the distance metrics for fitness evaluation. The damaged morphologies of creature are elaborated, and a crossover algorithm is used to control it. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creature having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of motion of virtual creature represents that improved swimming of virtual creatures is achieved in simulating mediums with viscous drag 1–10 arbitrary unit.",
"title": ""
},
{
"docid": "neg:1840418_7",
"text": "This study formulates a two-objective model to determine the optimal liner routing, ship size, and sailing frequency for container carriers by minimizing shipping costs and inventory costs. First, shipping and inventory cost functions are formulated using an analytical method. Then, based on a trade-off between shipping costs and inventory costs, Pareto optimal solutions of the twoobjective model are determined. Not only can the optimal ship size and sailing frequency be determined for any route, but also the routing decision on whether to route containers through a hub or directly to their destination can be made in objective value space. Finally, the theoretical findings are applied to a case study, with highly reasonable results. The results show that the optimal routing, ship size, and sailing frequency with respect to each level of inventory costs and shipping costs can be determined using the proposed model. The optimal routing decision tends to be shipping the cargo through a hub as the hub charge is decreased or its efficiency improved. In addition, the proposed model not only provides a tool to analyze the trade-off between shipping costs and inventory costs, but it also provides flexibility on the decision-making for container carriers. c © 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840418_8",
"text": "Figure 3: Example of phase correlation between two microphones. The peak of this function indicates the inter-channel delay. index associated with peak value of f(t). This delay estimator is computationally convenient and more robust to noise and reverberation than other approaches based on cross-correlation or adaptive ltering. In ideal conditions, the output of Equation (5) is a delta function centered on the correct delay. In real applications with a wide band signal, e.g., a speech signal, the outcome is not a perfect delta function. Rather it resembles a correlation function of a random process. The time index associated with the maximum value of the output of Equation (5) provides an estimation of the delay. The system can produce wrong answers when two or more peaks of similar amplitude are present, i.e., in highly reverber-ant conditions. The resolution in delay estimation is limited in discrete systems by the sampling frequency. In order to increase the accuracy, oversampling can be applied in the neighborhood of the peak, to achieve sub-sample precision. Fig. 3 demonstrates an example of the result of a cross-power spectrum time delay estimator. Once the relative delays associated with all considered microphone pairs are known, the source position (x s ; y s) is estimated as the point that would produce the most similar delay values to the observed ones. This optimization is performed by a downhill sim-plex algorithm 6] applied to minimize the Euclidean distance between M observed delays ^ i and the corresponding M theoretical delays i : An analysis of the impulse responses associated with all the microphones, given an acoustic source emitting at a speciic position, has shown that constructive interference phenomena occur in the presence of signiicant reverberation. In some cases, the direct wavefront happens to be weaker than a coincidence of reeections, inducing a wrong estimation of the arrival direction and leading to an incorrect result. Selecting only microphone pairs that show the highest peaks of phase correlation generally alleviates this problem. Location results obtained with this strategy show comparable performance (mean posi-Reverb. Time Average Error 10 mic pairs 4 mic pairs 0.1sec 38.4 cm 29.8 cm 0.6sec 51.3 cm 32.1 cm 1.7sec 105.0 cm 46.4 cm Table 1: Average location error using either all 10 pairs or 4 pairs of microphones. Three reverberation time conditions are considered. tion error of about 0.3 m) at reverberation times of 0.1 s and 0.6 s. …",
"title": ""
},
{
"docid": "neg:1840418_9",
"text": "Transseptal catheterization is a vital component of percutaneous transvenous mitral commissurotomy. Therefore, a well-executed transseptal catheterization is the key to a safe and successful percutaneous transvenous mitral commissurotomy. Two major problems inherent in atrial septal puncture for percutaneous transvenous mitral commissurotomy are cardiac perforation and puncture of an inappropriate atrial septal site. The former may lead to serious complication of cardiac tamponade and the latter to possible difficulty in maneuvering the Inoue balloon catheter across the mitral orifice. This article details atrial septal puncture technique, including landmark selection for optimal septal puncture sites, avoidance of inappropriate puncture sites, and step-by-step description of atrial septal puncture.",
"title": ""
},
{
"docid": "neg:1840418_10",
"text": "Some new parameters in Vivaldi Notch antennas are debated over in this paper. They can be availed for the bandwidth application amelioration. The aforementioned limiting factors comprise two parameters for the radial stub dislocation, one parameter for the stub opening angle, and one parameter for the stub’s offset angle. The aforementioned parameters are rectified by means of the optimization algorithm to accomplish a better frequency application. The results obtained in this article will eventually be collated with those of the other similar antennas. The best achieved bandwidth in this article is 17.1 GHz.",
"title": ""
},
{
"docid": "neg:1840418_11",
"text": "The Internet of Things (IoT) is expected to substantially support sustainable development of future smart cities. This article identifies the main issues that may prevent IoT from playing this crucial role, such as the heterogeneity among connected objects and the unreliable nature of associated services. To solve these issues, a cognitive management framework for IoT is proposed, in which dynamically changing real-world objects are represented in a virtualized environment, and where cognition and proximity are used to select the most relevant objects for the purpose of an application in an intelligent and autonomic way. Part of the framework is instantiated in terms of building blocks and demonstrated through a smart city scenario that horizontally spans several application domains. This preliminary proof of concept reveals the high potential that self-reconfigurable IoT can achieve in the context of smart cities.",
"title": ""
},
{
"docid": "neg:1840418_12",
"text": "This paper describes the three methodologies used by CALCE in their winning entry for the IEEE 2012 PHM Data Challenge competition. An experimental data set from seventeen ball bearings was provided by the FEMTO-ST Institute. The data set consisted of data from six bearings for algorithm training and data from eleven bearings for testing. The authors developed prognostic algorithms based on the data from the training bearings to estimate the remaining useful life of the test bearings. Three methodologies are presented in this paper. Result accuracies of the winning methodology are presented.",
"title": ""
},
{
"docid": "neg:1840418_13",
"text": "Exploiting the security vulnerabilities in web browsers, web applications and firewalls is a fundamental trait of cross-site scripting (XSS) attacks. Majority of web population with basic web awareness are vulnerable and even expert web users may not notice the attack to be able to respond in time to neutralize the ill effects of attack. Due to their subtle nature, a victimized server, a compromised browser, an impersonated email or a hacked web application tends to keep this form of attacks alive even in the present times. XSS attacks severely offset the benefits offered by Internet based services thereby impacting the global internet community. This paper focuses on defense, detection and prevention mechanisms to be adopted at various network doorways to neutralize XSS attacks using open source tools.",
"title": ""
},
{
"docid": "neg:1840418_14",
"text": "In an effort to overcome the data deluge in computational biology and bioinformatics and to facilitate bioinformatics research in the era of big data, we identify some of the most influential algorithms that have been widely used in the bioinformatics community. These top data mining and machine learning algorithms cover classification, clustering, regression, graphical model-based learning, and dimensionality reduction. The goal of this study is to guide the focus of scalable computing experts in the endeavor of applying new storage and scalable computation designs to bioinformatics algorithms that merit their attention most, following the engineering maxim of “optimize the common case”.",
"title": ""
},
{
"docid": "neg:1840418_15",
"text": "This paper presents an extension to the technology acceptance model (TAM) and empirically examines it in an enterprise resource planning (ERP) implementation environment. The study evaluated the impact of one belief construct (shared beliefs in the benefits of a technology) and two widely recognized technology implementation success factors (training and communication) on the perceived usefulness and perceived ease of use during technology implementation. Shared beliefs refer to the beliefs that organizational participants share with their peers and superiors on the benefits of the ERP system. Using data gathered from the implementation of an ERP system, we showed that both training and project communication influence the shared beliefs that users form about the benefits of the technology and that the shared beliefs influence the perceived usefulness and ease of use of the technology. Thus, we provided empirical and theoretical support for the use of managerial interventions, such as training and communication, to influence the acceptance of technology, since perceived usefulness and ease of use contribute to behavioral intention to use the technology. # 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840418_16",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "neg:1840418_17",
"text": "A review of both laboratory and field studies on the effects of setting goals when performing a task found that in 90% of the studies, specific and challenging goals lead to higher performance than easy goals, \"do your best\" goals, or no goals. Goals affect performance by directing attention, mobilizing effort, increasing persistence, and motivating strategy development. Goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, the subjects have sufficient ability (and ability differences are controlled), feedback is provided to show progress in relation to the goal, rewards such as money are given for goal attainment, the experimenter or manager is supportive, and assigned goals are accepted by the individual. No reliable individual differences have emerged in goal-setting studies, probably because the goals were typically assigned rather than self-set. Need for achievement and self-esteem may be the most promising individual difference variables.",
"title": ""
},
{
"docid": "neg:1840418_18",
"text": "Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need to be learned from data, the deficiencies of learned models have limited their utility for planning. To address these challenges, we propose TreeQN, a differentiable, recursive, tree-structured model that serves as a drop-in replacement for any value function network in deep RL with discrete actions. TreeQN dynamically constructs a tree by recursively applying a transition model in a learned abstract state space and then aggregating predicted rewards and state-values using a tree backup to estimate Q-values. We also propose ATreeC, an actor-critic variant that augments TreeQN with a softmax layer to form a stochastic policy network. Both approaches are trained end-to-end, such that the learned model is optimised for its actual use in the planner. We show that TreeQN and ATreeC outperform n-step DQN and A2C on a box-pushing task, as well as n-step DQN and value prediction networks (Oh et al., 2017) on multiple Atari games, with deeper trees often outperforming shallower ones. We also present a qualitative analysis that sheds light on the trees learned by TreeQN.",
"title": ""
}
] |
1840419 | Multimodal Network Embedding via Attention based Multi-view Variational Autoencoder | [
{
"docid": "pos:1840419_0",
"text": "Network representation is the basis of many applications and of extensive interest in various fields, such as information retrieval, social network analysis, and recommendation systems. Most previous methods for network representation only consider the incomplete aspects of a problem, including link structure, node information, and partial integration. The present study introduces a deep network representation model that seamlessly integrates the text information and structure of a network. The model captures highly non-linear relationships between nodes and complex features of a network by exploiting the variational autoencoder (VAE), which is a deep unsupervised generation algorithm. The representation learned with a paragraph vector model is merged with that learned with the VAE to obtain the network representation, which preserves both structure and text information. Comprehensive experiments is conducted on benchmark datasets and find that the introduced model performs better than state-of-the-art techniques.",
"title": ""
},
{
"docid": "pos:1840419_1",
"text": "This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of reasoning. Thus, we develop a multiple-layer SAN in which we query an image multiple times to infer the answer progressively. Experiments conducted on four image QA data sets demonstrate that the proposed SANs significantly outperform previous state-of-the-art approaches. The visualization of the attention layers illustrates the progress that the SAN locates the relevant visual clues that lead to the answer of the question layer-by-layer.",
"title": ""
},
{
"docid": "pos:1840419_2",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
},
{
"docid": "pos:1840419_3",
"text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. ar X iv :1 60 6. 00 06 1v 3 [ cs .C V ] 2 6 O ct 2 01 6 Ques%on:\t\r What\t\r color\t\r on\t\r the stop\t\r light\t\r is\t\r lit\t\r up\t\r \t\r ? ...\t\r ... color\t\r stop\t\r light\t\r lit co-‐a7en%on color\t\r ...\t\r stop\t\r \t\r light\t\r \t\r ... What color\t\r ... the stop light light\t\r \t\r ... 
What color What\t\r color\t\r on\t\r the\t\r stop\t\r light\t\r is\t\r lit\t\r up ...\t\r ... the\t\r stop\t\r light ...\t\r ... stop Image Answer:\t\r green Figure 1: Flowchart of our proposed hierarchical co-attention model. Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.",
"title": ""
}
] | [
{
"docid": "neg:1840419_0",
"text": "The Pemberton Happiness Index (PHI) is a recently developed integrative measure of well-being that includes components of hedonic, eudaimonic, social, and experienced well-being. The PHI has been validated in several languages, but not in Portuguese. Our aim was to cross-culturally adapt the Universal Portuguese version of the PHI and to assess its psychometric properties in a sample of the Brazilian population using online surveys.An expert committee evaluated 2 versions of the PHI previously translated into Portuguese by the original authors using a standardized form for assessment of semantic/idiomatic, cultural, and conceptual equivalence. A pretesting was conducted employing cognitive debriefing methods. In sequence, the expert committee evaluated all the documents and reached a final Universal Portuguese PHI version. For the evaluation of the psychometric properties, the data were collected using online surveys in a cross-sectional study. The study population included healthcare professionals and users of the social network site Facebook from several Brazilian geographic areas. In addition to the PHI, participants completed the Satisfaction with Life Scale (SWLS), Diener and Emmons' Positive and Negative Experience Scale (PNES), Psychological Well-being Scale (PWS), and the Subjective Happiness Scale (SHS). Internal consistency, convergent validity, known-group validity, and test-retest reliability were evaluated. Satisfaction with the previous day was correlated with the 10 items assessing experienced well-being using the Cramer V test. Additionally, a cut-off value of PHI to identify a \"happy individual\" was defined using receiver-operating characteristic (ROC) curve methodology.Data from 1035 Brazilian participants were analyzed (health professionals = 180; Facebook users = 855). Regarding reliability results, the internal consistency (Cronbach alpha = 0.890 and 0.914) and test-retest (intraclass correlation coefficient = 0.814) were both considered adequate. Most of the validity hypotheses formulated a priori (convergent and know-group) was further confirmed. The cut-off value of higher than 7 in remembered PHI was identified (AUC = 0.780, sensitivity = 69.2%, specificity = 78.2%) as the best one to identify a happy individual.We concluded that the Universal Portuguese version of the PHI is valid and reliable for use in the Brazilian population using online surveys.",
"title": ""
},
{
"docid": "neg:1840419_1",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "neg:1840419_2",
"text": "BACKGROUND\nProspective data from over 10 years of follow-up were used to examine neighbourhood deprivation, social fragmentation and trajectories of health.\n\n\nMETHODS\nFrom the third phase (1991-93) of the Whitehall II study of British civil servants, SF-36 health functioning was measured on up to five occasions for 7834 participants living in 2046 census wards. Multilevel linear regression models assessed the Townsend deprivation index and social fragmentation index as predictors of initial health and health trajectories.\n\n\nRESULTS\nIndependent of individual socioeconomic factors, deprivation was inversely associated with initial SF-36 physical component summary (PCS) score. Social fragmentation was not associated with PCS scores. Deprivation and social fragmentation were inversely associated with initial mental component summary (MCS) score. Neighbourhood characteristics were not associated with trajectories of PCS score or MCS score for the whole set. However, restricted analysis on longer term residents revealed that residents in deprived or socially fragmented neighbourhoods had lowest initial and smallest improvements in MCS score.\n\n\nCONCLUSIONS\nThis longitudinal study provides evidence that residence in a deprived or fragmented neighbourhood is associated with poorer mental health and that longer exposure to such neighbourhood environments has incremental effects. Associations between physical health functioning and neighbourhood characteristics were less clear. Mindful of the importance of individual socioeconomic factors, the findings warrant more detailed examination of materially and socially deprived neighbourhoods and their consequences for health.",
"title": ""
},
{
"docid": "neg:1840419_3",
"text": "We provide sets of parameters for multiplicative linear congruential generators (MLCGs) of different sizes and good performance with respect to the spectral test. For ` = 8, 9, . . . , 64, 127, 128, we take as a modulus m the largest prime smaller than 2`, and provide a list of multipliers a such that the MLCG with modulus m and multiplier a has a good lattice structure in dimensions 2 to 32. We provide similar lists for power-of-two moduli m = 2`, for multiplicative and non-multiplicative LCGs.",
"title": ""
},
{
"docid": "neg:1840419_4",
"text": "Induction motor especially three phase induction motor plays vital role in the industry due to their advantages over other electrical motors. Therefore, there is a strong demand for their reliable and safe operation. If any fault and failures occur in the motor it can lead to excessive downtimes and generate great losses in terms of revenue and maintenance. Therefore, an early fault detection is needed for the protection of the motor. In the current scenario, the health monitoring of the induction motor are increasing due to its potential to reduce operating costs, enhance the reliability of operation and improve service to the customers. The health monitoring of induction motor is an emerging technology for online detection of incipient faults. The on-line health monitoring involves taking measurements on a machine while it is in operating conditions in order to detect faults with the aim of reducing both unexpected failure and maintenance costs. In the present paper, a comprehensive survey of induction machine faults, diagnostic methods and future aspects in the health monitoring of induction motor has been discussed.",
"title": ""
},
{
"docid": "neg:1840419_5",
"text": "In this paper, a mobile robot with a tetrahedral shape for its basic structure is presented as a thrown robot for search and rescue robot application. The Tetrahedral Mobile Robot has its body in the center of the whole structure. The driving parts that produce the propelling force are located at each corner. As a driving wheel mechanism, we have developed the \"Omni-Ball\" with one active and two passive rotational axes, which are explained in detail. An actual prototype model has been developed to illustrate the concept and to perform preliminary motion experiments, through which the basic performance of the Tetrahedral Mobile Robot was confirmed",
"title": ""
},
{
"docid": "neg:1840419_6",
"text": "This paper presents a new approach to find energy-efficient motion plans for mobile robots. Motion planning has two goals: finding the routes and determining the velocities. We model the relationship of motors' speed and their power consumption with polynomials. The velocity of the robot is related to its wheels' velocities by performing a linear transformation. We compare the energy consumption of different routes at different velocities and consider the energy consumed for acceleration and turns. We use experiment-validated simulation to demonstrate up to 51% energy savings for searching an open area.",
"title": ""
},
{
"docid": "neg:1840419_7",
"text": "Ameloblastic fibrosarcoma is a mixed odontogenic tumor that can originate de novo or from a transformed ameloblastic fibroma. This report describes the case of a 34-year-old woman with a recurrent, rapidly growing, debilitating lesion. This lesion appeared as a large painful mandibular swelling that filled the oral cavity and extended to the infratemporal fossa. The lesion had been previously misdiagnosed as ameloblastoma. Twenty months after final surgery and postoperative chemotherapy, lung metastases were diagnosed after she reported respiratory signs and symptoms.",
"title": ""
},
{
"docid": "neg:1840419_8",
"text": "Distribution transformers are one of the most important equipment in power network. Because of, the large number of transformers distributed over a wide area in power electric systems, the data acquisition and condition monitoring is a important issue. This paper presents design and implementation of a mobile embedded system and a novel software to monitor and diagnose condition of transformers, by record key operation indictors of a distribution transformer like load currents, transformer oil, ambient temperatures and voltage of three phases. The proposed on-line monitoring system integrates a Global Service Mobile (GSM) Modem, with stand alone single chip microcontroller and sensor packages. Data of operation condition of transformer receives in form of SMS (Short Message Service) and will be save in computer server. Using the suggested online monitoring system will help utility operators to keep transformers in service for longer of time.",
"title": ""
},
{
"docid": "neg:1840419_9",
"text": "The human brain automatically attempts to interpret the physical visual inputs from our eyes in terms of plausible motion of the viewpoint and/or of the observed object or scene [Ellis 1938; Graham 1965; Giese and Poggio 2003]. In the physical world, the rules that define plausible motion are set by temporal coherence, parallax, and perspective projection. Our brain, however, refuses to feel constrained by the unrelenting laws of physics in what it deems plausible motion. Image metamorphosis experiments, in which unnatural, impossible in-between images are interpolated, demonstrate that under certain circumstances, we willingly accept chimeric images as plausible transition stages between images of actual, known objects [Beier and Neely 1992; Seitz and Dyer 1996]. Or think of cartoon animations which for the longest time were hand-drawn pieces of art that didn't need to succumb to physical correctness. The goal of our work is to exploit this freedom of perception for space-time interpolation, i.e., to generate transitions between still images that our brain accepts as plausible motion in a moving 3D world.",
"title": ""
},
{
"docid": "neg:1840419_10",
"text": "Nowadays PDF documents have become a dominating knowledge repository for both the academia and industry largely because they are very convenient to print and exchange. However, the methods of automated structure information extraction are yet to be fully explored and the lack of effective methods hinders the information reuse of the PDF documents. To enhance the usability for PDF-formatted electronic books, we propose a novel computational framework to analyze the underlying physical structure and logical structure. The analysis is conducted at both page level and document level, including global typographies, reading order, logical elements, chapter/section hierarchy and metadata. Moreover, two characteristics of PDF-based books, i.e., style consistency in the whole book document and natural rendering order of PDF files, are fully exploited in this paper to improve the conventional image-based structure extraction methods. This paper employs the bipartite graph as a common structure for modeling various tasks, including reading order recovery, figure and caption association, and metadata extraction. Based on the graph representation, the optimal matching (OM) method is utilized to find the global optima in those tasks. Extensive benchmarking using real-world data validates the high efficiency and discrimination ability of the proposed method.",
"title": ""
},
{
"docid": "neg:1840419_11",
"text": "Computer-aided diagnosis of neural diseases from EEG signals (or other physiological signals that can be treated as time series, e.g., MEG) is an emerging field that has gained much attention in past years. Extracting features is a key component in the analysis of EEG signals. In our previous works, we have implemented many EEG feature extraction functions in the Python programming language. As Python is gaining more ground in scientific computing, an open source Python module for extracting EEG features has the potential to save much time for computational neuroscientists. In this paper, we introduce PyEEG, an open source Python module for EEG feature extraction.",
"title": ""
},
{
"docid": "neg:1840419_12",
"text": "Relativistic electron beam generation studies have been carried out in LIA-400 system through explosive electron emission for various cathode materials. This paper presents the emission properties of different cathode materials at peak diode voltages varying from 10 to 220 kV and at peak current levels from 0.5 to 2.2 kA in a single pulse duration of 160-180 ns. The cathode materials used are graphite, stainless steel, and red polymer velvet. The perveance data calculated from experimental waveforms are compared with 1-D Child Langmuir formula to obtain the cathode plasma expansion velocity for various cathode materials. Various diode parameters are subject to shot to shot variation analysis. Velvet cathode proves to be the best electron emitter because of its lower plasma expansion velocity and least shot to shot variability.",
"title": ""
},
{
"docid": "neg:1840419_13",
"text": "In many real-world tasks, there are abundant unlabeled examples but the number of labeled training examples is limited, because labeling the examples requires human efforts and expertise. So, semi-supervised learning which tries to exploit unlabeled examples to improve learning performance has become a hot topic. Disagreement-based semi-supervised learning is an interesting paradigm, where multiple learners are trained for the task and the disagreements among the learners are exploited during the semi-supervised learning process. This survey article provides an introduction to research advances in this paradigm.",
"title": ""
},
{
"docid": "neg:1840419_14",
"text": "This priming study investigates the role of conceptual structure during language production, probing whether English speakers are sensitive to the structure of the event encoded by a prime sentence. In two experiments, participants read prime sentences aloud before describing motion events. Primes differed in 1) syntactic frame, 2) degree of lexical and conceptual overlap with target events, and 3) distribution of event components within frames. Results demonstrate that conceptual overlap between primes and targets led to priming of (a) the information that speakers chose to include in their descriptions of target events, (b) the way that information was mapped to linguistic elements, and (c) the syntactic structures that were built to communicate that information. When there was no conceptual overlap between primes and targets, priming was not successful. We conclude that conceptual structure is a level of representation activated during priming, and that it has implications for both Message Planning and Linguistic Formulation.",
"title": ""
},
{
"docid": "neg:1840419_15",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "neg:1840419_16",
"text": "Vehicular Ad hoc Networks is a special kind of mobile ad hoc network to provide communication among nearby vehicles and between vehicles and nearby fixed equipments. VANETs are mainly used for improving efficiency and safety of (future) transportation. There are chances of a number of possible attacks in VANET due to open nature of wireless medium. In this paper, we have classified these security attacks and logically organized/represented in a more lucid manner based on the level of effect of a particular security attack on intelligent vehicular traffic. Also, an effective solution is proposed for DOS based attacks which use the redundancy elimination mechanism consists of rate decreasing algorithm and state transition mechanism as its components. This solution basically adds a level of security to its already existing solutions of using various alternative options like channel-switching, frequency-hopping, communication technology switching and multiple-radio transceivers to counter affect the DOS attacks. Proposed scheme enhances the security in VANETs without using any cryptographic scheme.",
"title": ""
},
{
"docid": "neg:1840419_17",
"text": "Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by the advancement in Big Data, Deep Learning (DL) and drastically increased chip processing abilities, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. An overview of the main architectures of DNNs, and their usefulness in Pharmacology and Bioinformatics are presented in this work. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need of neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons, as DNNs and neuromorphic chips should also include glial cells, given the proven importance of astrocytes, a type of glial cell which contributes to information processing in the brain. The Deep Artificial Neuron-Astrocyte Networks (DANAN) could overcome the difficulties in architecture design, learning process and scalability of the current ML methods.",
"title": ""
},
{
"docid": "neg:1840419_18",
"text": "Detecting and recognizing text in natural scene images is a challenging, yet not completely solved task. In recent years several new systems that try to solve at least one of the two sub-tasks (text detection and text recognition) have been proposed. In this paper we present SEE, a step towards semi-supervised neural networks for scene text detection and recognition, that can be optimized end-to-end. Most existing works consist of multiple deep neural networks and several pre-processing steps. In contrast to this, we propose to use a single deep neural network, that learns to detect and recognize text from natural images, in a semi-supervised way. SEE is a network that integrates and jointly learns a spatial transformer network, which can learn to detect text regions in an image, and a text recognition network that takes the identified text regions and recognizes their textual content. We introduce the idea behind our novel approach and show its feasibility, by performing a range of experiments on standard benchmark datasets, where we achieve competitive results.",
"title": ""
}
] |
1840420 | Evaluating Student Satisfaction with Blended Learning in a Gender-Segregated Environment | [
{
"docid": "pos:1840420_0",
"text": "H istorically, retention of distance learners has been problematic with dropout rates disproportionably high compared to traditional course settings (Richards & Ridley, 1997; Wetzel, Radtke, & Stern, 1994). Dropout rates of 30 to 50% have been common (Moore & Kearsley, 1996). Students may experience feelings of isolation in distance courses compared to prior faceto-face educational experiences (Shaw & Polovina, 1999). If the distance courses feature limited contact with instructors and fellow students, the result of this isolation can be unfinished courses or degrees (Keegan, 1990). Student satisfaction in traditional learning environments has been overlooked in the past (Astin, 1993; DeBourgh, 1999; Navarro & Shoemaker, 2000). Student satisfaction has also not been given the proper attention in distance learning environments (Biner, Dean, & Mellinger, 1994). Richards and Ridley (1997) suggested further research is necessary to study factors affecting student enrollment and satisfaction. Prior studies in classroom-based courses have shown there is a high correlation between student satisfaction and retention (Astin, 1993; Edwards & Waters, 1982). This high correlation has also been found in studies in which distance learners were the target population (Bailey, Bauman, & Lata, 1998). The purpose of this study was to identify factors influencing student satisfaction in online courses, and to create and validate an instrument to measure student satisfaction in online courses.",
"title": ""
}
] | [
{
"docid": "neg:1840420_0",
"text": "The richness of visual details in most computer graphics images nowadays is largely due to the extensive use of texture mapping techniques. Texture mapping is the main tool in computer graphics to integrate a given shape to a given pattern. Despite its power it has problems and limitations. Current solutions cannot handle complex shapes properly. The de nition of the mapping function and problems like distortions can turn the process into a very cumbersome one for the application programmer and consequently for the nal user. An associated problem is the synthesis of patterns which are used as texture. The available options are usually limited to scanning in real pictures. This document is a PhD proposal to investigate techniques to integrate complex shapes and patterns which will not only overcome problems usually associated with texture mapping but also give us more control and make less ad hoc the task of combining shape and pattern. We break the problem into three parts: modeling of patterns, modeling of shape and integration. The integration step will use common information to drive both the modeling of patterns and shape in an integrated manner. Our approach is inspired by observations on how these processes happen in real life, where there is no pattern without a shape associated with it. The proposed solutions will hopefully extent the generality, applicability and exibility of existing integration methods in computer graphics. iii Table of",
"title": ""
},
{
"docid": "neg:1840420_1",
"text": "A cascade of sigma-delta modulator stages that employ a feedforward architecture to reduce the signal ranges required at the integrator inputs and outputs has been used to implement a broadband, high-resolution oversampling CMOS analog-to-digital converter capable of operating from low-supply voltages. An experimental prototype of the proposed architecture has been integrated in a 0.25-/spl mu/m CMOS technology and operates from an analog supply of only 1.2 V. At a sampling rate of 40 MSamples/sec, it achieves a dynamic range of 96 dB for a 1.25-MHz signal bandwidth. The analog power dissipation is 44 mW.",
"title": ""
},
{
"docid": "neg:1840420_2",
"text": "Previous research has shown that people differ in their implicit theories about the essential characteristics of intelligence and emotions. Some people believe these characteristics to be predetermined and immutable (entity theorists), whereas others believe that these characteristics can be changed through learning and behavior training (incremental theorists). The present study provides evidence that in healthy adults (N = 688), implicit beliefs about emotions and emotional intelligence (EI) may influence performance on the ability-based Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Adults in our sample with incremental theories about emotions and EI scored higher on the MSCEIT than entity theorists, with implicit theories about EI showing a stronger relationship to scores than theories about emotions. Although our participants perceived both emotion and EI as malleable, they viewed emotions as more malleable than EI. Women and young adults in general were more likely to be incremental theorists than men and older adults. Furthermore, we found that emotion and EI theories mediated the relationship of gender and age with ability EI. Our findings suggest that people's implicit theories about EI may influence their emotional abilities, which may have important consequences for personal and professional EI training.",
"title": ""
},
{
"docid": "neg:1840420_3",
"text": "Frequent action video game players often outperform non-gamers on measures of perception and cognition, and some studies find that video game practice enhances those abilities. The possibility that video game training transfers broadly to other aspects of cognition is exciting because training on one task rarely improves performance on others. At first glance, the cumulative evidence suggests a strong relationship between gaming experience and other cognitive abilities, but methodological shortcomings call that conclusion into question. We discuss these pitfalls, identify how existing studies succeed or fail in overcoming them, and provide guidelines for more definitive tests of the effects of gaming on cognition.",
"title": ""
},
{
"docid": "neg:1840420_4",
"text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.",
"title": ""
},
{
"docid": "neg:1840420_5",
"text": "We just released an Open Source receiver that is able to decode IEEE 802.11a/g/p Orthogonal Frequency Division Multiplexing (OFDM) frames in software. This is the first Software Defined Radio (SDR) based OFDM receiver supporting channel bandwidths up to 20MHz that is not relying on additional FPGA code. Our receiver comprises all layers from the physical up to decoding the MAC packet and extracting the payload of IEEE 802.11a/g/p frames. In our demonstration, visitors can interact live with the receiver while it is decoding frames that are sent over the air. The impact of moving the antennas and changing the settings are displayed live in time and frequency domain. Furthermore, the decoded frames are fed to Wireshark where the WiFi traffic can be further investigated. It is possible to access and visualize the data in every decoding step from the raw samples, the autocorrelation used for frame detection, the subcarriers before and after equalization, up to the decoded MAC packets. The receiver is completely Open Source and represents one step towards experimental research with SDR.",
"title": ""
},
{
"docid": "neg:1840420_6",
"text": "In order to acquire a lexicon, young children must segment speech into words, even though most words are unfamiliar to them. This is a non-trivial task because speech lacks any acoustic analog of the blank spaces between printed words. Two sources of information that might be useful for this task are distributional regularity and phonotactic constraints. Informally, distributional regularity refers to the intuition that sound sequences that occur frequently and in a variety of contexts are better candidates for the lexicon than those that occur rarely or in few contexts. We express that intuition formally by a class of functions called DR functions. We then put forth three hypotheses: First, that children segment using DR functions. Second, that they exploit phonotactic constraints on the possible pronunciations of words in their language. Specifically, they exploit both the requirement that every word must have a vowel and the constraints that languages impose on word-initial and word-final consonant clusters. Third, that children learn which word-boundary clusters are permitted in their language by assuming that all permissible word-boundary clusters will eventually occur at utterance boundaries. Using computational simulation, we investigate the effectiveness of these strategies for segmenting broad phonetic transcripts of child-directed English. The results show that DR functions and phonotactic constraints can be used to significantly improve segmentation. Further, the contributions of DR functions and phonotactic constraints are largely independent, so using both yields better segmentation than using either one alone. Finally, learning the permissible word-boundary clusters from utterance boundaries does not degrade segmentation performance.",
"title": ""
},
{
"docid": "neg:1840420_7",
"text": "The professional norms of good journalism include in particular the following: truthfulness, objectivity, neutrality and detachment. For Public Relations these norms are at best irrelevant. The only thing that matters is success. And this success is measured in terms ofachieving specific communication aims which are \"externally defined by a client, host organization or particular groups ofstakeholders\" (Hanitzsch, 2007, p. 2). Typical aims are, e.g., to convince the public of the attractiveness of a product, of the justice of one's own political goals or also of the wrongfulness of a political opponent.",
"title": ""
},
{
"docid": "neg:1840420_8",
"text": "A model is proposed that specifies the conditions under which individuals will become internally motivated to perform effectively on their jobs. The model focuses on the interaction among three classes of variables: (a) the psychological states of employees that must be present for internally motivated work behavior to develop; (b) the characteristics of jobs that can create these psychological states; and (c) the attributes of individuals that determine how positively a person will respond to a complex and challenging job. The model was tested for 658 employees who work on 62 different jobs in seven organizations, and results support its validity. A number of special features of the model are discussed (including its use as a basis for the diagnosis of jobs and the evaluation of job redesign projects), and the model is compared to other theories of job design.",
"title": ""
},
{
"docid": "neg:1840420_9",
"text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.",
"title": ""
},
{
"docid": "neg:1840420_10",
"text": "Although graph embedding has been a powerful tool for modeling data intrinsic structures, simply employing all features for data structure discovery may result in noise amplification. This is particularly severe for high dimensional data with small samples. To meet this challenge, this paper proposes a novel efficient framework to perform feature selection for graph embedding, in which a category of graph embedding methods is cast as a least squares regression problem. In this framework, a binary feature selector is introduced to naturally handle the feature cardinality in the least squares formulation. The resultant integral programming problem is then relaxed into a convex Quadratically Constrained Quadratic Program (QCQP) learning problem, which can be efficiently solved via a sequence of accelerated proximal gradient (APG) methods. Since each APG optimization is w.r.t. only a subset of features, the proposed method is fast and memory efficient. The proposed framework is applied to several graph embedding learning problems, including supervised, unsupervised, and semi-supervised graph embedding. Experimental results on several high dimensional data demonstrated that the proposed method outperformed the considered state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840420_11",
"text": "Controller Area Network (CAN) is the leading serial bus system for embedded control. More than two billion CAN nodes have been sold since the protocol's development in the early 1980s. CAN is a mainstream network and was internationally standardized (ISO 11898–1) in 1993. This paper describes an approach to implementing security services on top of a higher level Controller Area Network (CAN) protocol, in particular, CANopen. Since the CAN network is an open, unsecured network, every node has access to all data on the bus. A system which produces and consumes sensitive data is not well suited for this environment. Therefore, a general-purpose security solution is needed which will allow secure nodes access to the basic security services such as authentication, integrity, and confidentiality.",
"title": ""
},
{
"docid": "neg:1840420_12",
"text": "This paper provides a brief introduction to recent work in st atistical parsing and its applications. We highlight succes ses to date, remaining challenges, and promising future work.",
"title": ""
},
{
"docid": "neg:1840420_13",
"text": "Gene expression is a fundamentally stochastic process, with randomness in transcription and translation leading to cell-to-cell variations in mRNA and protein levels. This variation appears in organisms ranging from microbes to metazoans, and its characteristics depend both on the biophysical parameters governing gene expression and on gene network structure. Stochastic gene expression has important consequences for cellular function, being beneficial in some contexts and harmful in others. These situations include the stress response, metabolism, development, the cell cycle, circadian rhythms, and aging.",
"title": ""
},
{
"docid": "neg:1840420_14",
"text": "BACKGROUND\nHigh-grade intraepithelial neoplasia is known to progress to invasive squamous-cell carcinoma of the anus. There are limited reports on the rate of progression from high-grade intraepithelial neoplasia to anal cancer in HIV-positive men who have sex with men.\n\n\nOBJECTIVES\nThe purpose of this study was to describe in HIV-positive men who have sex with men with perianal high-grade intraepithelial neoplasia the rate of progression to anal cancer and the factors associated with that progression.\n\n\nDESIGN\nThis was a prospective cohort study.\n\n\nSETTINGS\nThe study was conducted at an outpatient clinic at a tertiary care center in Toronto.\n\n\nPATIENTS\nThirty-eight patients with perianal high-grade anal intraepithelial neoplasia were identified among 550 HIV-positive men who have sex with men.\n\n\nINTERVENTION\nAll of the patients had high-resolution anoscopy for symptoms, screening, or surveillance with follow-up monitoring/treatment.\n\n\nMAIN OUTCOME MEASURES\nWe measured the incidence of anal cancer per 100 person-years of follow-up.\n\n\nRESULTS\nSeven (of 38) patients (18.4%) with perianal high-grade intraepithelial neoplasia developed anal cancer. The rate of progression was 6.9 (95% CI, 2.8-14.2) cases of anal cancer per 100 person-years of follow-up. A diagnosis of AIDS, previously treated anal cancer, and loss of integrity of the lesion were associated with progression. Anal bleeding was more than twice as common in patients who progressed to anal cancer.\n\n\nLIMITATIONS\nThere was the potential for selection bias and patients were offered treatment, which may have affected incidence estimates.\n\n\nCONCLUSIONS\nHIV-positive men who have sex with men should be monitored for perianal high-grade intraepithelial neoplasia. Those with high-risk features for the development of anal cancer may need more aggressive therapy.",
"title": ""
},
{
"docid": "neg:1840420_15",
"text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.",
"title": ""
},
{
"docid": "neg:1840420_16",
"text": "Automatic analysis of human facial expression is a challenging problem with many applications. Most of the existing automated systems for facial expression analysis attempt to recognize a few prototypic emotional expressions, such as anger and happiness. Instead of representing another approach to machine analysis of prototypic facial expressions of emotion, the method presented in this paper attempts to handle a large range of human facial behavior by recognizing facial muscle actions that produce expressions. Virtually all of the existing vision systems for facial muscle action detection deal only with frontal-view face images and cannot handle temporal dynamics of facial actions. In this paper, we present a system for automatic recognition of facial action units (AUs) and their temporal models from long, profile-view face image sequences. We exploit particle filtering to track 15 facial points in an input face-profile sequence, and we introduce facial-action-dynamics recognition from continuous video input using temporal rules. The algorithm performs both automatic segmentation of an input video into facial expressions pictured and recognition of temporal segments (i.e., onset, apex, offset) of 27 AUs occurring alone or in a combination in the input face-profile video. A recognition rate of 87% is achieved.",
"title": ""
},
{
"docid": "neg:1840420_17",
"text": "We advocate the use of implicit fields for learning generative models of shapes and introduce an implicit field decoder for shape generation, aimed at improving the visual quality of the generated shapes. An implicit field assigns a value to each point in 3D space, so that a shape can be extracted as an iso-surface. Our implicit field decoder is trained to perform this assignment by means of a binary classifier. Specifically, it takes a point coordinate, along with a feature vector encoding a shape, and outputs a value which indicates whether the point is outside the shape or not. By replacing conventional decoders by our decoder for representation learning and generative modeling of shapes, we demonstrate superior results for tasks such as shape autoencoding, generation, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.",
"title": ""
},
{
"docid": "neg:1840420_18",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "neg:1840420_19",
"text": "In this paper, we introduce a novel stereo-monocular fusion approach to on-road localization and tracking of vehicles. Utilizing a calibrated stereo-vision rig, the proposed approach combines monocular detection with stereo-vision for on-road vehicle localization and tracking for driver assistance. The system initially acquires synchronized monocular frames and calculates depth maps from the stereo rig. The system then detects vehicles in the image plane using an active learning-based monocular vision approach. Using the image coordinates of detected vehicles, the system then localizes the vehicles in real-world coordinates using the calculated depth map. The vehicles are tracked both in the image plane, and in real-world coordinates, fusing information from both the monocular and stereo modalities. Vehicles' states are estimated and tracked using Kalman filtering. Quantitative analysis of tracks is provided. The full system takes 46ms to process a single frame.",
"title": ""
}
] |
1840421 | Game User Experience Evaluation | [
{
"docid": "pos:1840421_0",
"text": "s (New York: ACM), pp. 1617 – 20. MASLOW, A.H., 1954,Motivation and personality (New York: Harper). MCDONAGH, D., HEKKERT, P., VAN ERP, J. and GYI, D. (Eds), 2003, Design and Emotion: The Experience of Everyday Things (London: Taylor & Francis). MILLARD, N., HOLE, L. and CROWLE, S., 1999, Smiling through: motivation at the user interface. In Proceedings of the HCI International’99, Volume 2 (pp. 824 – 8) (Mahwah, NJ, London: Lawrence Erlbaum Associates). NORMAN, D., 2004a, Emotional design: Why we love (or hate) everyday things (New York: Basic Books). NORMAN, D., 2004b, Introduction to this special section on beauty, goodness, and usability. Human Computer Interaction, 19, pp. 311 – 18. OVERBEEKE, C.J., DJAJADININGRAT, J.P., HUMMELS, C.C.M. and WENSVEEN, S.A.G., 2002, Beauty in Usability: Forget about ease of use! In Pleasure with products: Beyond usability, W. Green and P. Jordan (Eds), pp. 9 – 18 (London: Taylor & Francis). 96 M. Hassenzahl and N. Tractinsky D ow nl oa de d by [ M as se y U ni ve rs ity L ib ra ry ] at 2 1: 34 2 3 Ju ly 2 01 1 PICARD, R., 1997, Affective computing (Cambridge, MA: MIT Press). PICARD, R. and KLEIN, J., 2002, Computers that recognise and respond to user emotion: theoretical and practical implications. Interacting with Computers, 14, pp. 141 – 69. POSTREL, V., 2002, The substance of style (New York: Harper Collins). SELIGMAN, M.E.P. and CSIKSZENTMIHALYI, M., 2000, Positive Psychology: An Introduction. American Psychologist, 55, pp. 5 – 14. SHELDON, K.M., ELLIOT, A.J., KIM, Y. and KASSER, T., 2001, What is satisfying about satisfying events? Testing 10 candidate psychological needs. Journal of Personality and Social Psychology, 80, pp. 325 – 39. SINGH, S.N. and DALAL, N.P., 1999, Web home pages as advertisements. Communications of the ACM, 42, pp. 91 – 8. SUH, E., DIENER, E. and FUJITA, F., 1996, Events and subjective well-being: Only recent events matter. Journal of Personality and Social Psychology,",
"title": ""
}
] | [
{
"docid": "neg:1840421_0",
"text": "Separation of video clips into foreground and background components is a useful and important technique, making recognition, classification, and scene analysis more efficient. In this paper, we propose a motion-assisted matrix restoration (MAMR) model for foreground-background separation in video clips. In the proposed MAMR model, the backgrounds across frames are modeled by a low-rank matrix, while the foreground objects are modeled by a sparse matrix. To facilitate efficient foreground-background separation, a dense motion field is estimated for each frame, and mapped into a weighting matrix which indicates the likelihood that each pixel belongs to the background. Anchor frames are selected in the dense motion estimation to overcome the difficulty of detecting slowly moving objects and camouflages. In addition, we extend our model to a robust MAMR model against noise for practical applications. Evaluations on challenging datasets demonstrate that our method outperforms many other state-of-the-art methods, and is versatile for a wide range of surveillance videos.",
"title": ""
},
{
"docid": "neg:1840421_1",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
},
{
"docid": "neg:1840421_2",
"text": "Many electronic feedback systems have been proposed for writing support. However, most of these systems only aim at supporting writing to communicate instead of writing to learn, as in the case of literature review writing. Trigger questions are potentially forms of support for writing to learn, but current automatic question generation approaches focus on factual question generation for reading comprehension or vocabulary assessment. This article presents a novel Automatic Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as a form of support for students' learning through writing. We conducted a large-scale case study, including 24 human supervisors and 33 research students, in an Engineering Research Method course and compared questions generated by G-Asks with human generated questions. The results indicate that G-Asks can generate questions as useful as human supervisors (‘useful’ is one of five question quality measures) while significantly outperforming Human Peer and Generic Questions in most quality measures after filtering out questions with grammatical and semantic errors. Furthermore, we identified the most frequent question types, derived from the human supervisors’ questions and discussed how the human supervisors generate such questions from the source text. General Terms: Automatic Question Generation, Natural Language Processing, Academic Writing Support",
"title": ""
},
{
"docid": "neg:1840421_3",
"text": "Recently, Real Time Location Systems (RTLS) have been designed to provide location information of positioning target. The kernel of RTLS is localization algorithm, range-base localization algorithm is concerned as high precision. This paper introduces real-time range-based indoor localization algorithms, including Time of Arrival, Time Difference of Arrival, Received Signal Strength Indication, Time of Flight, and Symmetrical Double Sided Two Way Ranging. Evaluation criteria are proposed for assessing these algorithms, namely positioning accuracy, scale, cost, energy efficiency, and security. We also introduce the latest some solution, compare their Strengths and weaknesses. Finally, we give a recommendation about selecting algorithm from the viewpoint of the practical application need.",
"title": ""
},
{
"docid": "neg:1840421_4",
"text": "This paper presents the development of control circuit for single phase inverter using Atmel microcontroller. The attractiveness of this configuration is the elimination of a microcontroller to generate sinusoidal pulse width modulation (SPWM) pulses. The Atmel microcontroller is able to store all the commands to generate the necessary waveforms to control the frequency of the inverter through proper design of switching pulse. In this paper concept of the single phase inverter and it relation with the microcontroller is reviewed first. Subsequently approach and methods and dead time control are discussed. Finally simulation results and experimental results are discussed.",
"title": ""
},
{
"docid": "neg:1840421_5",
"text": "In this work we do an analysis of Bitcoin’s price and volatility. Particularly, we look at Granger-causation relationships among the pairs of time series: Bitcoin price and the S&P 500, Bitcoin price and the VIX, Bitcoin realized volatility and the S&P 500, and Bitcoin realized volatility and the VIX. Additionally, we explored the relationship between Bitcoin weekly price and public enthusiasm for Blockchain, the technology behind Bitcoin, as measured by Google Trends data. we explore the Granger-causality relationships between Bitcoin weekly price and Blockchain Google Trend time series. We conclude that there exists a bidirectional Granger-causality relationship between Bitcoin realized volatility and the VIX at the 5% significance level, that we cannot reject the hypothesis that Bitcoin weekly price do not Granger-causes Blockchain trends and that we cannot reject the hypothesis that Bitcoin realized volatility do not Granger-causes S&P 500.",
"title": ""
},
{
"docid": "neg:1840421_6",
"text": "As computing becomes more pervasive, the nature of applications must change accordingly. In particular, applications must become more flexible in order to respond to highly dynamic computing environments, and more autonomous, to reflect the growing ratio of applications to users and the corresponding decline in the attention a user can devote to each. That is, applications must become more context-aware. To facilitate the programming of such applications, infrastructure is required to gather, manage, and disseminate context information to applications. This paper is concerned with the development of appropriate context modeling concepts for pervasive computing, which can form the basis for such a context management infrastructure. This model overcomes problems associated with previous context models, including their lack of formality and generality, and also tackles issues such as wide variations in information quality, the existence of complex relationships amongst context information and temporal aspects of context.",
"title": ""
},
{
"docid": "neg:1840421_7",
"text": "Complete scene understanding has been an aspiration of computer vision since its very early days. It has applications in autonomous navigation, aerial imaging, surveillance, human-computer interaction among several other active areas of research. While many methods since the advent of deep learning have taken performance in several scene understanding tasks to respectable levels, the tasks are far from being solved. One problem that plagues scene understanding is low-resolution. Convolutional Neural Networks that achieve impressive results on high resolution struggle when confronted with low resolution because of the inability to learn hierarchical features and weakening of signal with depth. In this thesis, we study the low resolution and suggest approaches that can overcome its consequences on three popular tasks object detection, in-the-wild face recognition, and semantic segmentation. The popular object detectors were designed for, trained, and benchmarked on datasets that have a strong bias towards medium and large sized objects. When these methods are finetuned and tested on a dataset of small objects, they perform miserably. The most successful detection algorithms follow a two-stage pipeline: the first which quickly generates regions of interest that are likely to contain the object and the second, which classifies these proposal regions. We aim to adapt both these stages for the case of small objects; the first by modifying anchor box generation based on theoretical considerations, and the second using a simple-yet-effective super-resolution step. Motivated by the success of being able to detect small objects, we study the problem of detecting and recognising objects with huge variations in resolution, in the problem of face recognition in semistructured scenes. Semi-structured scenes like social settings are more challenging than regular ones: there are several more faces of vastly different scales, there are large variations in illumination, pose and expression, and the existing datasets do not capture these variations. We address the unique challenges in this setting by (i) benchmarking popular methods for the problem of face detection, and (ii) proposing a method based on resolution-specific networks to handle different scales. Semantic segmentation is a more challenging localisation task where the goal is to assign a semantic class label to every pixel in the image. Solving such a problem is crucial for self-driving cars where we need sharper boundaries for roads, obstacles and paraphernalia. For want of a higher receptive field and a more global view of the image, CNN networks forgo resolution. This results in poor segmentation of complex boundaries, small and thin objects. We propose prefixing a super-resolution step before semantic segmentation. Through experiments, we show that a performance boost can be obtained on the popular streetview segmentation dataset, CityScapes.",
"title": ""
},
{
"docid": "neg:1840421_8",
"text": "We describe a completely automated large scale visual recommendation system for fashion. Our focus is to efficiently harness the availability of large quantities of online fashion images and their rich meta-data. Specifically, we propose two classes of data driven models in the Deterministic Fashion Recommenders (DFR) and Stochastic Fashion Recommenders (SFR) for solving this problem. We analyze relative merits and pitfalls of these algorithms through extensive experimentation on a large-scale data set and baseline them against existing ideas from color science. We also illustrate key fashion insights learned through these experiments and show how they can be employed to design better recommendation systems. The industrial applicability of proposed models is in the context of mobile fashion shopping. Finally, we also outline a large-scale annotated data set of fashion images Fashion-136K) that can be exploited for future research in data driven visual fashion.",
"title": ""
},
{
"docid": "neg:1840421_9",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "neg:1840421_10",
"text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.",
"title": ""
},
{
"docid": "neg:1840421_11",
"text": "Information-maximization clustering learns a probabilistic classifier in an unsupervised manner so that mutual information between feature vectors and cluster assignments is maximized. A notable advantage of this approach is that it involves only continuous optimization of model parameters, which is substantially simpler than discrete optimization of cluster assignments. However, existing methods still involve nonconvex optimization problems, and therefore finding a good local optimal solution is not straightforward in practice. In this letter, we propose an alternative information-maximization clustering method based on a squared-loss variant of mutual information. This novel approach gives a clustering solution analytically in a computationally efficient way via kernel eigenvalue decomposition. Furthermore, we provide a practical model selection procedure that allows us to objectively optimize tuning parameters included in the kernel function. Through experiments, we demonstrate the usefulness of the proposed approach.",
"title": ""
},
{
"docid": "neg:1840421_12",
"text": "Inductive Power Transfer (IPT) is a practical method for recharging Electric Vehicles (EVs) because is it safe, efficient and convenient. Couplers or Power Pads are the power transmitters and receivers used with such contactless charging systems. Due to improvements in power electronic components, the performance and efficiency of an IPT system is largely determined by the coupling or flux linkage between these pads. Conventional couplers are based on circular pad designs and due to their geometry have fundamentally limited magnetic flux above the pad. This results in poor coupling at any realistic spacing between the ground pad and the vehicle pickup mounted on the chassis. Performance, when added to the high tolerance to misalignment required for a practical EV charging system, necessarily results in circular pads that are large, heavy and expensive. A new pad topology termed a flux pipe is proposed in this paper that overcomes difficulties associated with conventional circular pads. Due to the magnetic structure, the topology has a significantly improved flux path making more efficient and compact IPT charging systems possible.",
"title": ""
},
{
"docid": "neg:1840421_13",
"text": "Owing to the complexity of the photovoltaic system structure and their environment, especially under the partial shadows environment, the output characteristics of photovoltaic arrays are greatly affected. Under the partial shadows environment, power-voltage (P-V) characteristics curve is of multi-peak. This makes that it is a difficult task to track the actual maximum power point. In addition, most programs are not able to get the maximum power point under these conditions. In this paper, we study the P-V curves under both uniform illumination and partial shadows environments, and then design an algorithm to track the maximum power point and select the strategy to deal with the MPPT algorithm by DSP chips and DC-DC converters. It is simple and easy to allow solar panels to maintain the best solar energy utilization resulting in increasing output at all times. Meanwhile, in order to track local peak point and improve the tracking speed, the algorithm proposed DC-DC converters operating feed-forward control scheme. Compared with the conventional controller, this controller costs much less time. This paper focuses mainly on specific processes of the algorithm, and being the follow-up basis for implementation of control strategies.",
"title": ""
},
{
"docid": "neg:1840421_14",
"text": "Emotion keyword spotting approach can detect emotion well for explicit emotional contents while it obviously cannot compare to supervised learning approaches for detecting emotional contents of particular events. In this paper, we target earthquake situations in Japan as the particular events for emotion analysis because the affected people often show their states and emotions towards the situations via social networking sites. Additionally, tracking crowd emotions in the Internet during the earthquakes can help authorities to quickly decide appropriate assistance policies without paying the cost as the traditional public surveys. Our three main contributions in this paper are: a) the appropriate choice of emotions; b) the novel proposal of two classification methods for determining the earthquake related tweets and automatically identifying the emotions in Twitter; c) tracking crowd emotions during different earthquake situations, a completely new application of emotion analysis research. Our main analysis results show that Twitter users show their Fear and Anxiety right after the earthquakes occurred while Calm and Unpleasantness are not showed clearly during the small earthquakes but in the large tremor.",
"title": ""
},
{
"docid": "neg:1840421_15",
"text": "But, as we look to the horizon of a decade hence, we see no silver bullet. There is no single development, in either technology or in management technique, that by itself promises even one orderof-magnitude improvement in productivity, in reliability, in simplicity. In this article, I shall try to show why, by examining both the nature of the software problem and the properties of the bullets proposed.",
"title": ""
},
{
"docid": "neg:1840421_16",
"text": "For many decades correlation and power spectrum have been primary tools for digital signal processing applications in the biomedical area. The information contained in the power spectrum is essentially that of the autocorrelation sequence; which is sufficient for complete statistical descriptions of Gaussian signals of known means. However, there are practical situations where one needs to look beyond autocorrelation of a signal to extract information regarding deviation from Gaussianity and the presence of phase relations. Higher order spectra, also known as polyspectra, are spectral representations of higher order statistics, i.e. moments and cumulants of third order and beyond. HOS (higher order statistics or higher order spectra) can detect deviations from linearity, stationarity or Gaussianity in the signal. Most of the biomedical signals are non-linear, non-stationary and non-Gaussian in nature and therefore it can be more advantageous to analyze them with HOS compared to the use of second-order correlations and power spectra. In this paper we have discussed the application of HOS for different bio-signals. HOS methods of analysis are explained using a typical heart rate variability (HRV) signal and applications to other signals are reviewed.",
"title": ""
},
{
"docid": "neg:1840421_17",
"text": "We introduce a notion of algorithmic stability of learning algorithms—that we term hypothesis stability—that captures stability of the hypothesis output by the learning algorithm in the normed space of functions from which hypotheses are selected. e main result of the paper bounds the generalization error of any learning algorithm in terms of its hypothesis stability. e bounds are based on martingale inequalities in the Banach space to which the hypotheses belong. We apply the general bounds to bound the performance of some learning algorithms based on empirical risk minimization and stochastic gradient descent. Parts of the work were done when Tongliang Liu was a visiting PhD student at Pompeu Fabra University. School of Information Technologies, Faculty Engineering and Information Technologies, University of Sydney, Sydney, Australia, tliang.liu@gmail.com, dacheng.tao@sydney.edu.au Department of Economics and Business, Pompeu Fabra University, Barcelona, Spain, gabor.lugosi@upf.edu ICREA, Pg. Llus Companys 23, 08010 Barcelona, Spain Barcelona Graduate School of Economics AI group, DTIC, Universitat Pompeu Fabra, Barcelona, Spain, gergely.neu@gmail.com 1",
"title": ""
},
{
"docid": "neg:1840421_18",
"text": "A new capacitive pressure sensor with very large dynamic range is introduced. The sensor is based on a new technique for substantially changing the surface area of the electrodes, rather than the inter-electrode spacing as commonly done at the present. The prototype device has demonstrated a change in capacitance of approximately 2500 pF over a pressure range of 10 kPa.",
"title": ""
}
] |
1840422 | Going Spear Phishing: Exploring Embedded Training and Awareness | [
{
"docid": "pos:1840422_0",
"text": "Phishing attacks, in which criminals lure Internet users to websites that impersonate legitimate sites, are occurring with increasing frequency and are causing considerable harm to victims. In this paper we describe the design and evaluation of an embedded training email system that teaches people about phishing during their normal use of email. We conducted lab experiments contrasting the effectiveness of standard security notices about phishing with two embedded training designs we developed. We found that embedded training works better than the current practice of sending security notices. We also derived sound design principles for embedded training systems.",
"title": ""
}
] | [
{
"docid": "neg:1840422_0",
"text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.",
"title": ""
},
{
"docid": "neg:1840422_1",
"text": "Approximate computing can decrease the design complexity with an increase in performance and power efficiency for error resilient applications. This brief deals with a new design approach for approximation of multipliers. The partial products of the multiplier are altered to introduce varying probability terms. Logic complexity of approximation is varied for the accumulation of altered partial products based on their probability. The proposed approximation is utilized in two variants of 16-bit multipliers. Synthesis results reveal that two proposed multipliers achieve power savings of 72% and 38%, respectively, compared to an exact multiplier. They have better precision when compared to existing approximate multipliers. Mean relative error figures are as low as 7.6% and 0.02% for the proposed approximate multipliers, which are better than the previous works. Performance of the proposed multipliers is evaluated with an image processing application, where one of the proposed models achieves the highest peak signal to noise ratio.",
"title": ""
},
{
"docid": "neg:1840422_2",
"text": "This paper investigates possible improvements in grid voltage stability and transient stability with wind energy converter units using modified P/Q control. The voltage source converter (VSC) in modern variable speed wind turbines is utilized to achieve this enhancement. The findings show that using only available hardware for variable-speed turbines improvements could be obtained in all cases. Moreover, it was found that power system stability improvement is often larger when the control is modified for a given variable speed wind turbine rather than when standard variable speed turbines are used instead of fixed speed turbines. To demonstrate that the suggested modifications can be incorporated in real installations, a real situation is presented where short-term voltage stability is improved as an additional feature of an existing VSC high voltage direct current (HVDC) installation",
"title": ""
},
{
"docid": "neg:1840422_3",
"text": "This paper presents a short-baseline real-time stereo vision system that is capable of the simultaneous and robust estimation of the ego-motion and of the 3D structure and the independent motion of thousands of points of the environment. Kalman filters estimate the position and velocity of world points in 3D Euclidean space. The six degrees of freedom of the ego-motion are obtained by minimizing the projection error of the current and previous clouds of static points. Experimental results with real data in indoor and outdoor environments demonstrate the robustness, accuracy and efficiency of our approach. Since the baseline is as short as 13cm, the device is head-mountable, and can be used by a visually impaired person. Our proposed system can be used to augment the perception of the user in complex dynamic environments.",
"title": ""
},
{
"docid": "neg:1840422_4",
"text": "This paper presents a novel method for discovering causal relations between events encoded in text. In order to determine if two events from the same sentence are in a causal relation or not, we first build a graph representation of the sentence that encodes lexical, syntactic, and semantic information. In a second step, we automatically extract multiple graph patterns (or subgraphs) from such graph representations and sort them according to their relevance in determining the causality between two events from the same sentence. Finally, in order to decide if these events are causal or not, we train a binary classifier based on what graph patterns can be mapped to the graph representation associated with the two events. Our experimental results show that capturing the feature dependencies of causal event relations using a graph representation significantly outperforms an existing method that uses a flat representation of features.",
"title": ""
},
{
"docid": "neg:1840422_5",
"text": "This paper presents a simple end-to-end model for speech recognition, combining a convolutional network based acoustic model and a graph decoding. It is trained to output letters, with transcribed speech, without the need for force alignment of phonemes. We introduce an automatic segmentation criterion for training from sequence annotation without alignment that is on par with CTC [6] while being simpler. We show competitive results in word error rate on the Librispeech corpus [18] with MFCC features, and promising results from raw waveform.",
"title": ""
},
{
"docid": "neg:1840422_6",
"text": "A new ultra-wideband monocycle pulse generator with good performance is designed and demonstrated. The pulse generator circuits employ SRD(step recovery diode), Schottky diode, and simple RC coupling and decoupling circuit, and are completely fabricated on the planar microstrip structure, which have the characteristic of low cost and small size. Through SRD modeling, the accuracy of the simulation is improved, which save the design period greatly. The generated monocycle pulse has the peak-to-peak amplitude 1.3V, pulse width 370ps and pulse repetition rate of 10MHz, whose waveform features are symmetric well and low ringing level. Good agreement between the measured and calculated results is achieved.",
"title": ""
},
{
"docid": "neg:1840422_7",
"text": "The energy landscape theory of protein folding is a statistical description of a protein's potential surface. It assumes that folding occurs through organizing an ensemble of structures rather than through only a few uniquely defined structural intermediates. It suggests that the most realistic model of a protein is a minimally frustrated heteropolymer with a rugged funnel-like landscape biased toward the native structure. This statistical description has been developed using tools from the statistical mechanics of disordered systems, polymers, and phase transitions of finite systems. We review here its analytical background and contrast the phenomena in homopolymers, random heteropolymers, and protein-like heteropolymers that are kinetically and thermodynamically capable of folding. The connection between these statistical concepts and the results of minimalist models used in computer simulations is discussed. The review concludes with a brief discussion of how the theory helps in the interpretation of results from fast folding experiments and in the practical task of protein structure prediction.",
"title": ""
},
{
"docid": "neg:1840422_8",
"text": "We explore the reliability and validity of a self-report measure of procrastination and conscientiousness designed for use with thirdto fifth-grade students. The responses of 120 students are compared with teacher and parent ratings of the student. Confirmatory and exploratory factor analyses were also used to examine the structure of the scale. Procrastination and conscientiousness are highly correlated (inversely); evidence suggests that procrastination and conscientiousness are aspects of the same construct. Procrastination and conscientiousness are correlated with the Physiological Anxiety subscale of the Revised Children’s Manifest Anxiety Scale, and with the Task (Mastery) and Avoidance (Task Aversiveness) subscales of Skaalvik’s (1997) Goal Orientation Scales. Both theoretical implications and implications for interventions are discussed. © 2002 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "neg:1840422_9",
"text": "by Dimitrios Tzionas for the degree of Doctor rerum naturalium Hand motion capture with an RGB-D sensor gained recently a lot of research attention, however, even most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a-priori knowledge of the object’s shape and skeleton. In case of unknown object shape there are existing 3d reconstruction methods that capitalize on distinctive geometric or texture features. These methods though fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3d hand motion for in-hand scanning e↵ectively facilitates the reconstruction of such objects and we fuse the rich additional information of hands into a 3d reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. To my family Maria, Konstantinos, Glyka. In the loving memory of giagià 'Olga & pappo‘c Giànnhc. (Olga Matoula & Ioannis Matoulas) πste ô yuqò πsper ô qe–r ‚stin· ka» gÄr ô qe»r ÓrganÏn ‚stin Êrgànwn, ka» Â no‹c e⁄doc e d¿n ka» ô a“sjhsic e⁄doc a sjht¿n.",
"title": ""
},
{
"docid": "neg:1840422_10",
"text": "Paraphrases extracted from parallel corpora by the pivot method (Bannard and Callison-Burch, 2005) constitute a valuable resource for multilingual NLP applications. In this study, we analyse the semantics of unigram pivot paraphrases and use a graph-based sense induction approach to unveil hidden sense distinctions in the paraphrase sets. The comparison of the acquired senses to gold data from the Lexical Substitution shared task (McCarthy and Navigli, 2007) demonstrates that sense distinctions exist in the paraphrase sets and highlights the need for a disambiguation step in applications using this resource.",
"title": ""
},
{
"docid": "neg:1840422_11",
"text": "A transformer provides galvanic isolation and grounding of the photovoltaic (PV) array in a PV-fed grid-connected inverter. Inclusion of the transformer, however, may increase the cost and/or bulk of the system. To overcome this drawback, a single-phase, single-stage [no extra converter for voltage boost or maximum power point tracking (MPPT)], doubly grounded, transformer-less PV interface, based on the buck-boost principle, is presented. The configuration is compact and uses lesser components. Only one (undivided) PV source and one buck-boost inductor are used and shared between the two half cycles, which prevents asymmetrical operation and parameter mismatch problems. Total harmonic distortion and DC component of the current supplied to the grid is low, compared to existing topologies and conform to standards like IEEE 1547. A brief review of the existing, transformer-less, grid-connected inverter topologies is also included. It is demonstrated that, as compared to the split PV source topology, the proposed configuration is more effective in MPPT and array utilization. Design and analysis of the inverter in discontinuous conduction mode is carried out. Simulation and experimental results are presented.",
"title": ""
},
{
"docid": "neg:1840422_12",
"text": "Forest fires play a critical role in landscape transformation, vegetation succession, soil degradation and air quality. Improvements in fire risk estimation are vital to reduce the negative impacts of fire, either by lessen burn severity or intensity through fuel management, or by aiding the natural vegetation recovery using post-fire treatments. This paper presents the methods to generate the input variables and the risk integration developed within the Firemap project (funded under the Spanish Ministry of Science and Technology) to map wildland fire risk for several regions of Spain. After defining the conceptual scheme for fire risk assessment, the paper describes the methods used to generate the risk parameters, and presents",
"title": ""
},
{
"docid": "neg:1840422_13",
"text": "MOTIVATION\nDuring the past decade, the new focus on genomics has highlighted a particular challenge: to integrate the different views of the genome that are provided by various types of experimental data.\n\n\nRESULTS\nThis paper describes a computational framework for integrating and drawing inferences from a collection of genome-wide measurements. Each dataset is represented via a kernel function, which defines generalized similarity relationships between pairs of entities, such as genes or proteins. The kernel representation is both flexible and efficient, and can be applied to many different types of data. Furthermore, kernel functions derived from different types of data can be combined in a straightforward fashion. Recent advances in the theory of kernel methods have provided efficient algorithms to perform such combinations in a way that minimizes a statistical loss function. These methods exploit semidefinite programming techniques to reduce the problem of finding optimizing kernel combinations to a convex optimization problem. Computational experiments performed using yeast genome-wide datasets, including amino acid sequences, hydropathy profiles, gene expression data and known protein-protein interactions, demonstrate the utility of this approach. A statistical learning algorithm trained from all of these data to recognize particular classes of proteins--membrane proteins and ribosomal proteins--performs significantly better than the same algorithm trained on any single type of data.\n\n\nAVAILABILITY\nSupplementary data at http://noble.gs.washington.edu/proj/sdp-svm",
"title": ""
},
{
"docid": "neg:1840422_14",
"text": "Rock mass description and characterisation is a basic task for exploration, mining work-flows and ground-water studies. Rock analysis can be performed using borehole logs that are created using a televiewer. Planar discontinuities in the rock appear as sinusoidal curves in borehole logs. The aim of this project is to develop a fast algorithm to analyse borehole imagery using image processing techniques, to identify and trace the discontinuities, and to perform quantitative analysis on their distribution.",
"title": ""
},
{
"docid": "neg:1840422_15",
"text": "Joseph Goldstein has written in this journal that creation (through invention) and revelation (through discovery) are two different routes to advancement in the biomedical sciences1. In my work as a phytochemist, particularly during the period from the late 1960s to the 1980s, I have been fortunate enough to travel both routes. I graduated from the Beijing Medical University School of Pharmacy in 1955. Since then, I have been involved in research on Chinese herbal medicine in the China Academy of Chinese Medical Sciences (previously known as the Academy of Traditional Chinese Medicine). From 1959 to 1962, I was released from work to participate in a training course in Chinese medicine that was especially designed for professionals with backgrounds in Western medicine. The 2.5-year training guided me to the wonderful treasure to be found in Chinese medicine and toward understanding the beauty in the philosophical thinking that underlies a holistic view of human beings and the universe.",
"title": ""
},
{
"docid": "neg:1840422_16",
"text": "This paper presents the design of a new haptic feedback device for transradial myoelectric upper limb prosthesis that allows the amputee person to perceive the sensation of force-gripping and object-sliding. The system designed has three mechanical-actuator units to convey the sensation of force, and one vibrotactile unit to transmit the sensation of object sliding. The device designed will be placed on the user's amputee forearm. In order to validate the design of the structure, a stress analysis through Finite Element Method (FEM) is conducted.",
"title": ""
},
{
"docid": "neg:1840422_17",
"text": "Dynamic Optimization Problems (DOPs) have been widely studied using Evolutionary Algorithms (EAs). Yet, a clear and rigorous definition of DOPs is lacking in the Evolutionary Dynamic Optimization (EDO) community. In this paper, we propose a unified definition of DOPs based on the idea of multiple-decision-making discussed in the Reinforcement Learning (RL) community. We draw a connection between EDO and RL by arguing that both of them are studying DOPs according to our definition of DOPs. We point out that existing EDO or RL research has been mainly focused on some types of DOPs. A conceptualized benchmark problem, which is aimed at the systematic study of various DOPs, is then developed. Some interesting experimental studies on the benchmark reveal that EDO and RL methods are specialized in certain types of DOPs and more importantly new algorithms for DOPs can be developed by combining the strength of both EDO and RL methods.",
"title": ""
},
{
"docid": "neg:1840422_18",
"text": "This paper is the second part in a series that provides a comprehensive survey of the problems and techniques of tracking maneuvering targets in the absence of the so-called measurement-origin uncertainty. It surveys motion models of ballistic targets used for target tracking. Models for all three phases (i.e., boost, coast, and reentry) of motion are covered.",
"title": ""
}
] |
1840423 | Developing a Knowledge Management Strategy: Reflections from an Action Research Project | [
{
"docid": "pos:1840423_0",
"text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.",
"title": ""
},
{
"docid": "pos:1840423_1",
"text": "Knowledge is a broad and abstract notion that has defined epistemological debate in western philosophy since the classical Greek era. In the past Richard Watson was the accepting senior editor for this paper. MISQ Review articles survey, conceptualize, and synthesize prior MIS research and set directions for future research. For more details see http://www.misq.org/misreview/announce.html few years, however, there has been a growing interest in treating knowledge as a significant organizational resource. Consistent with the interest in organizational knowledge and knowledge management (KM), IS researchers have begun promoting a class of information systems, referred to as knowledge management systems (KMS). The objective of KMS is to support creation, transfer, and application of knowledge in organizations. Knowledge and knowledge management are complex and multi-faceted concepts. Thus, effective development and implementation of KMS requires a foundation in several rich",
"title": ""
}
] | [
{
"docid": "neg:1840423_0",
"text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.",
"title": ""
},
{
"docid": "neg:1840423_1",
"text": "Guadua angustifolia Kunth was successfully propagated in vitro from axillary buds. Culture initiation, bud sprouting, shoot and plant multiplication, rooting and acclimatization, were evaluated. Best results were obtained using explants from greenhouse-cultivated plants, following a disinfection procedure that comprised the sequential use of an alkaline detergent, a mixture of the fungicide Benomyl and the bactericide Agri-mycin, followed by immersion in sodium hypochlorite (1.5% w/v) for 10 min, and culturing on Murashige and Skoog medium containing 2 ml l−1 of Plant Preservative Mixture®. Highest bud sprouting in original explants was observed when 3 mg l−1 N6-benzylaminopurine (BAP) was incorporated into the culture medium. Production of lateral shoots in in vitro growing plants increased with BAP concentration in culture medium, up to 5 mg l−1, the highest concentration assessed. After six subcultures, clumps of 8–12 axes were obtained, and their division in groups of 3–5 axes allowed multiplication of the plants. Rooting occurred in vitro spontaneously in 100% of the explants that produced lateral shoots. Successful acclimatization of well-rooted clumps of 5–6 axes was achieved in the greenhouse under mist watering in a mixture of soil, sand and rice hulls (1:1:1).",
"title": ""
},
{
"docid": "neg:1840423_2",
"text": "\"The second edition is clearer and adds more examples on how to use STL in a practical environment. Moreover, it is more concerned with performance and tools for its measurement. Both changes are very welcome.\"--Lawrence Rauchwerger, Texas A&M University \"So many algorithms, so little time! The generic algorithms chapter with so many more examples than in the previous edition is delightful! The examples work cumulatively to give a sense of comfortable competence with the algorithms, containers, and iterators used.\"--Max A. Lebow, Software Engineer, Unisys Corporation The STL Tutorial and Reference Guide is highly acclaimed as the most accessible, comprehensive, and practical introduction to the Standard Template Library (STL). Encompassing a set of C++ generic data structures and algorithms, STL provides reusable, interchangeable components adaptable to many different uses without sacrificing efficiency. Written by authors who have been instrumental in the creation and practical application of STL, STL Tutorial and Reference Guide, Second Edition includes a tutorial, a thorough description of each element of the library, numerous sample applications, and a comprehensive reference. You will find in-depth explanations of iterators, generic algorithms, containers, function objects, and much more. Several larger, non-trivial applications demonstrate how to put STL's power and flexibility to work. This book will also show you how to integrate STL with object-oriented programming techniques. In addition, the comprehensive and detailed STL reference guide will be a constant and convenient companion as you learn to work with the library. This second edition is fully updated to reflect all of the changes made to STL for the final ANSI/ISO C++ language standard. It has been expanded with new chapters and appendices. Many new code examples throughout the book illustrate individual concepts and techniques, while larger sample programs demonstrate the use of the STL in real-world C++ software development. An accompanying Web site, including source code and examples referenced in the text, can be found at http://www.cs.rpi.edu/~musser/stl-book/index.html.",
"title": ""
},
{
"docid": "neg:1840423_3",
"text": "BACKGROUND\nAnaemia is associated with poor cancer control, particularly in patients undergoing radiotherapy. We investigated whether anaemia correction with epoetin beta could improve outcome of curative radiotherapy among patients with head and neck cancer.\n\n\nMETHODS\nWe did a multicentre, double-blind, randomised, placebo-controlled trial in 351 patients (haemoglobin <120 g/L in women or <130 g/L in men) with carcinoma of the oral cavity, oropharynx, hypopharynx, or larynx. Patients received curative radiotherapy at 60 Gy for completely (R0) and histologically incomplete (R1) resected disease, or 70 Gy for macroscopically incompletely resected (R2) advanced disease (T3, T4, or nodal involvement) or for primary definitive treatment. All patients were assigned to subcutaneous placebo (n=171) or epoetin beta 300 IU/kg (n=180) three times weekly, from 10-14 days before and continuing throughout radiotherapy. The primary endpoint was locoregional progression-free survival. We assessed also time to locoregional progression and survival. Analysis was by intention to treat.\n\n\nFINDINGS\n148 (82%) patients given epoetin beta achieved haemoglobin concentrations higher than 140 g/L (women) or 150 g/L (men) compared with 26 (15%) given placebo. However, locoregional progression-free survival was poorer with epoetin beta than with placebo (adjusted relative risk 1.62 [95% CI 1.22-2.14]; p=0.0008). For locoregional progression the relative risk was 1.69 (1.16-2.47, p=0.007) and for survival was 1.39 (1.05-1.84, p=0.02).\n\n\nINTERPRETATION\nEpoetin beta corrects anaemia but does not improve cancer control or survival. Disease control might even be impaired. Patients receiving curative cancer treatment and given erythropoietin should be studied in carefully controlled trials.",
"title": ""
},
{
"docid": "neg:1840423_4",
"text": "BACKGROUND\nCaring traditionally has been at the center of nursing. Effectively measuring the process of nurse caring is vital in nursing research. A short, less burdensome dimensional instrument for patients' use is needed for this purpose.\n\n\nOBJECTIVES\nTo derive and validate a shorter Caring Behaviors Inventory (CBI) within the context of the 42-item CBI.\n\n\nMETHODS\nThe responses to the 42-item CBI from 362 hospitalized patients were used to develop a short form using factor analysis. A test-retest reliability study was conducted by administering the shortened CBI to new samples of patients (n = 64) and nurses (n = 42).\n\n\nRESULTS\nFactor analysis yielded a 24-item short form (CBI-24) that (a) covers the four major dimensions assessed by the 42-item CBI, (b) has internal consistency (alpha =.96) and convergent validity (r =.62) similar to the 42-item CBI, (c) reproduces at least 97% of the variance of the 42 items in patients and nurses, (d) provides statistical conclusions similar to the 42-item CBI on scoring for caring behaviors by patients and nurses, (e) has similar sensitivity in detecting between-patient difference in perceptions, (f) obtains good test-retest reliability (r = .88 for patients and r=.82 for nurses), and (g) confirms high internal consistency (alpha >.95) as a stand-alone instrument administered to the new samples.\n\n\nCONCLUSION\nCBI-24 appears to be equivalent to the 42-item CBI in psychometric properties, validity, reliability, and scoring for caring behaviors among patients and nurses. These results recommend the use of CBI-24 to reduce response burden and research costs.",
"title": ""
},
{
"docid": "neg:1840423_5",
"text": "The main aim of the current paper is to develop a high-order numerical scheme to solve the space–time tempered fractional diffusion-wave equation. The convergence order of the proposed method is O(τ2 + h4). Also, we prove the unconditional stability and convergence of the developed method. The numerical results show the efficiency of the provided numerical scheme. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840423_6",
"text": "While it is possible to understand utopias and dystopias as particular kinds of sociopolitical systems, in this text we argue that utopias and dystopias can also be understood as particular kinds of information systems in which data is received, stored, generated, processed, and transmitted by the minds of human beings that constitute the system’s ‘nodes’ and which are connected according to specific network topologies. We begin by formulating a model of cybernetic information-processing properties that characterize utopias and dystopias. It is then shown that the growing use of neuroprosthetic technologies for human enhancement is expected to radically reshape the ways in which human minds access, manipulate, and share information with one another; for example, such technologies may give rise to posthuman ‘neuropolities’ in which human minds can interact with their environment using new sensorimotor capacities, dwell within shared virtual cyberworlds, and link with one another to form new kinds of social organizations, including hive minds that utilize communal memory and decision-making. Drawing on our model, we argue that the dynamics of such neuropolities will allow (or perhaps even impel) the creation of new kinds of utopias and dystopias that were previously impossible to realize. Finally, we suggest that it is important that humanity begin thoughtfully exploring the ethical, social, and political implications of realizing such technologically enabled societies by studying neuropolities in a place where they have already been ‘pre-engineered’ and provisionally exist: in works of audiovisual science fiction such as films, television series, and role-playing games",
"title": ""
},
{
"docid": "neg:1840423_7",
"text": "Despite significant accuracy improvement in convolutional neural networks (CNN) based object detectors, they often require prohibitive runtimes to process an image for real-time applications. State-of-the-art models often use very deep networks with a large number of floating point operations. Efforts such as model compression learn compact models with fewer number of parameters, but with much reduced accuracy. In this work, we propose a new framework to learn compact and fast object detection networks with improved accuracy using knowledge distillation [20] and hint learning [34]. Although knowledge distillation has demonstrated excellent improvements for simpler classification setups, the complexity of detection poses new challenges in the form of regression, region proposals and less voluminous labels. We address this through several innovations such as a weighted cross-entropy loss to address class imbalance, a teacher bounded loss to handle the regression component and adaptation layers to better learn from intermediate teacher distributions. We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. Our results show consistent improvement in accuracy-speed trade-offs for modern multi-class detection models.",
"title": ""
},
{
"docid": "neg:1840423_8",
"text": "Searle (1989) posits a set of adequacy criteria for any account of the meaning and use of performative verbs, such as order or promise. Central among them are: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-verifying; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning. He then argues that the fundamental problem with assertoric accounts of performatives is that they fail (b), and hence (a), because being committed to having an intention does not guarantee having that intention. Relying on a uniform meaning for verbs on their reportative and performative uses, we propose an assertoric analysis of performative utterances that does not require an actual intention for deriving (b), and hence can meet (a) and (c). Explicit performative utterances are those whose illocutionary force is made explicit by the verbs appearing in them (Austin 1962): (1) I (hereby) promise you to be there at five. (is a promise) (2) I (hereby) order you to be there at five. (is an order) (3) You are (hereby) ordered to report to jury duty. (is an order) (1)–(3) look and behave syntactically like declarative sentences in every way. Hence there is no grammatical basis for the once popular claim that I promise/ order spells out a ‘performative prefix’ that is silent in all other declaratives. Such an analysis, in any case, leaves unanswered the question of how illocutionary force is related to compositional meaning and, consequently, does not explain how the first person and present tense are special, so that first-person present tense forms can spell out performative prefixes, while others cannot. Minimal variations in person or tense remove the ‘performative effect’: (4) I promised you to be there at five. (is not a promise) (5) He promises to be there at five. (is not a promise) An attractive idea is that utterances of sentences like those in (1)–(3) are asser∗ The names of the authors appear in alphabetical order. 150 Condoravdi & Lauer tions, just like utterances of other declaratives, whose truth is somehow guaranteed. In one form or another, this basic strategy has been pursued by a large number of authors ever since Austin (1962) (Lemmon 1962; Hedenius 1963; Bach & Harnish 1979; Ginet 1979; Bierwisch 1980; Leech 1983; among others). One type of account attributes self-verification to meaning proper. Another type, most prominently exemplified by Bach & Harnish (1979), tries to derive the performative effect by means of an implicature-like inference that the hearer may draw based on the utterance of the explicit performative. Searle’s (1989) Challenge Searle (1989) mounts an argument against analyses of explicit performative utterances as self-verifying assertions. He takes the argument to show that an assertoric account is impossible. Instead, we take it to pose a challenge that can be met, provided one supplies the right semantics for the verbs involved. Searle’s argument is based on the following desiderata he posits for any theory of explicit performatives: (a) performative utterances are performances of the act named by the performative verb; (b) performative utterances are self-guaranteeing; (c) performative utterances achieve (a) and (b) in virtue of their literal meaning, which, in turn, ought to be based on a uniform lexical meaning of the verb across performative and reportative uses. 
According to Searle’s speech act theory, making a promise requires that the promiser intend to do so, and similarly for other performative verbs (the sincerity condition). It follows that no assertoric account can meet (a-c): An assertion cannot ensure that the speaker has the necessary intention. “Such an assertion does indeed commit the speaker to the existence of the intention, but the commitment to having the intention doesn’t guarantee the actual presence of the intention.” Searle (1989: 546) Hence assertoric accounts must fail on (b), and, a forteriori, on (a) and (c).1 Although Searle’s argument is valid, his premise that for truth to be guaranteed the speaker must have a particular intention is questionable. In the following, we give an assertoric account that delivers on (a-c). We aim for an 1 It should be immediately clear that inference-based accounts cannot meet (a-c) above. If the occurrence of the performative effect depends on the hearer drawing an inference, then such sentences could not be self-verifying, for the hearer may well fail to draw the inference. Performative Verbs and Performative Acts 151 account on which the assertion of the explicit performative is the performance of the act named by the performative verb. No hearer inferences are necessary. 1 Reportative and Performative Uses What is the meaning of the word order, then, so that it can have both reportative uses – as in (6) – and performative uses – as in (7)? (6) A ordered B to sign the report. (7) [A to B] I order you to sign the report now. The general strategy in this paper will be to ask what the truth conditions of reportative uses of performative verbs are, and then see what happens if these verbs are put in the first person singular present tense. The reason to start with the reportative uses is that speakers have intuitions about their truth conditions. This is not true for performative uses, because these are always true when uttered, obscuring the truth-conditional content of the declarative sentence.2 An assertion of (6) takes for granted that A presumed to have authority over B and implies that there was a communicative act from A to B. But what kind of communicative act? (7) or, in the right context, (8a-c) would suffice. (8) a. Sign the report now! b. You must sign the report now! c. I want you to sign the report now! What do these sentences have in common? We claim it is this: In the right context they commit A to a particular kind of preference for B signing the report immediately. If B accepts the utterance, he takes on a commitment to act as though he, too, prefers signing the report. If the report is co-present with A and B, he will sign it, if the report is in his office, he will leave to go there immediately, and so on. To comply with an order to p is to act as though one prefers p. One need not actually prefer it, but one has to act as if one did. The authority mentioned above amounts to this acceptance being socially or institutionally mandated. Of course, B has the option to refuse to take on this commitment, in either of two ways: (i) he can deny A’s authority, (ii) while accepting the authority, he can refuse to abide by it, thereby violating the institutional or social mandate. Crucially, in either case, (6) will still be true, as witnessed by the felicity of: 2 Szabolcsi (1982), in one of the earliest proposals for a compositional semantics of performative utterances, already pointed out the importance of reportative uses. 152 Condoravdi & Lauer (9) a. (6), but B refused to do it. 
b. (6), but B questioned his authority. Not even uptake by the addressee is necessary for order to be appropriate, as seen in (10) and the naturally occurring (11):3 (10) (6), but B did not hear him. (11) He ordered Kornilov to desist but either the message failed to reach the general or he ignored it.4 What is necessary is that the speaker expected uptake to happen, arguably a minimal requirement for an act to count as a communicative event. To sum up, all that is needed for (6) to be true and appropriate is that (i) there is a communicative act from A to B which commits A to a preference for B signing the report immediately and (ii) A presumes to have authority over B. The performative effect arises precisely when the utterance itself is a witness for the existential claim in (i). There are two main ingredients in the meaning of order informally outlined above: the notion of a preference, in particular a special kind of preference that guides action, and the notion of a commitment. The next two sections lay some conceptual groundwork before we spell out our analysis in section 4. 2 Representing Preferences To represent preferences that guide action, we need a way to represent preferences of different strength. Kratzer’s (1981) theory of modality is not suitable for this purpose. Suppose, for instance, that Sven desires to finish his paper and that he also wants to lie around all day, doing nothing. Modeling his preferences in the style of Kratzer, the propositions expressed by (12) and (13) would have to be part of Sven’s bouletic ordering source assigned to the actual world: (12) Sven finishes his paper. (13) Sven lies around all day, doing nothing. But then, Sven should be equally happy if he does nothing as he is if he finishes his paper. We want to be able to explain why, given his knowledge that (12) and (13) are incompatible, he works on his paper. Intuitively, it is because the preference expressed by (12) is more important than that expressed by (13). 3 We owe this observation to Lauri Karttunen. 4 https://tspace.library.utoronto.ca/citd/RussianHeritage/12.NR/NR.12.html Performative Verbs and Performative Acts 153 Preference Structures Definition 1. A preference structure relative to an information state W is a pair 〈P,≤〉, where P⊆℘(W ) and ≤ is a (weak) partial order on P. We can now define a notion of consistency that is weaker than requiring that all propositions in the preference structure be compatible: Definition 2. A preference structure 〈P,≤〉 is consistent iff for any p,q ∈ P such that p∩q = / 0, either p < q or q < p. Since preference structures are defined relative to an information state W , consistency will require not only logically but also contextually incompatible propositions to be strictly ranked. For example, if W is Sven’s doxastic state, and he knows that (12) and (13) are incompatible, for a bouletic preference structure of his to be consistent it must strictly rank the two propositions. In general, bouletic preference",
"title": ""
},
{
"docid": "neg:1840423_9",
"text": "Ego level is a broad construct that summarizes individual differences in personality development 1 . We examine ego level as it is represented in natural language, using a composite sample of four datasets comprising nearly 44,000 responses. We find support for a developmental sequence in the structure of correlations between ego levels, in analyses of Linguistic Inquiry and Word Count (LIWC) categories 2 and in an examination of the individual words that are characteristic of each level. The LIWC analyses reveal increasing complexity and, to some extent, increasing breadth of perspective with higher levels of development. The characteristic language of each ego level suggests, for example, a shift from consummatory to appetitive desires at the lowest stages, a dawning of doubt at the Self-aware stage, the centrality of achievement motivation at the Conscientious stage, an increase in mutuality and intellectual growth at the Individualistic stage and some renegotiation of life goals and reflection on identity at the highest levels of development. Continuing empirical analysis of ego level and language will provide a deeper understanding of ego development, its relationship with other models of personality and individual differences, and its utility in characterizing people, texts and the cultural contexts that produce them. A linguistic analysis of nearly 44,000 responses to the Washington University Sentence Completion Test elucidates the construct of ego development (personality development through adulthood) and identifies unique linguistic markers of each level of development.",
"title": ""
},
{
"docid": "neg:1840423_10",
"text": "Financial crises have occurred for many centuries. They are often preceded by a credit boom and a rise in real estate and other asset prices, as in the current crisis. They are also often associated with severe disruption in the real economy. This paper surveys the theoretical and empirical literature on crises. The first explanation of banking crises is that they are a panic. The second is that they are part of the business cycle. Modeling crises as a global game allows the two to be unified. With all the liquidity problems in interbank markets that have occurred during the current crisis, there is a growing literature on this topic. Perhaps the most serious market failure associated with crises is contagion, and there are many papers on this important topic. The relationship between asset price bubbles, particularly in real estate, and crises is discussed at length. Disciplines Economic Theory | Finance | Finance and Financial Management This journal article is available at ScholarlyCommons: http://repository.upenn.edu/fnce_papers/403 Financial Crises: Theory and Evidence Franklin Allen University of Pennsylvania Ana Babus Cambridge University Elena Carletti European University Institute",
"title": ""
},
{
"docid": "neg:1840423_11",
"text": "Hydro Muscles are linear actuators resembling ordinary biological muscles in terms of active dynamic output, passive material properties and appearance. The passive and dynamic characteristics of the latex based Hydro Muscle are addressed. The control tests of modular muscles are presented together with a muscle model relating sensed quantities with net force. Hydro Muscles are discussed in the context of conventional actuators. The hypothesis that Hydro Muscles have greater efficiency than McKibben Muscles is experimentally verified. Hydro Muscle peak efficiency with (without) back flow consideration was 88% (27%). Possible uses of Hydro Muscles are illustrated by relevant robotics projects at WPI. It is proposed that Hydro Muscles can also be an excellent educational tool for moderate-budget robotics classrooms and labs; the muscles are inexpensive (in the order of standard latex tubes of comparable size), made of off-the-shelf elements in less than 10 minutes, easily customizable, lightweight, biologically inspired, efficient, compliant soft linear actuators that are adept for power-augmentation. Moreover, a single source can actuate many muscles by utilizing control of flow and/or pressure. Still further, these muscles can utilize ordinary tap water and successfully operate within a safe range of pressures not overly exceeding standard water household pressure of about 0.59 MPa (85 psi).",
"title": ""
},
{
"docid": "neg:1840423_12",
"text": "Over the last decade, the endocannabinoid system has emerged as a pivotal mediator of acute and chronic liver injury, with the description of the role of CB1 and CB2 receptors and their endogenous lipidic ligands in various aspects of liver pathophysiology. A large number of studies have demonstrated that CB1 receptor antagonists represent an important therapeutic target, owing to beneficial effects on lipid metabolism and in light of its antifibrogenic properties. Unfortunately, the brain-penetrant CB1 antagonist rimonabant, initially approved for the management of overweight and related cardiometabolic risks, was withdrawn because of an alarming rate of mood adverse effects. However, the efficacy of peripherally-restricted CB1 antagonists with limited brain penetrance has now been validated in preclinical models of NAFLD, and beneficial effects on fibrosis and its complications are anticipated. CB2 receptor is currently considered as a promising anti-inflammatory and antifibrogenic target, although clinical development of CB2 agonists is still awaited. In this review, we highlight the latest advances on the impact of the endocannabinoid system on the key steps of chronic liver disease progression and discuss the therapeutic potential of molecules targeting cannabinoid receptors.",
"title": ""
},
{
"docid": "neg:1840423_13",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "neg:1840423_14",
"text": "In this paper, we propose an enhanced method for detecting light blobs (LBs) for intelligent headlight control (IHC). The main function of the IHC system is to automatically convert high-beam headlights to low beam when vehicles are found in the vicinity. Thus, to implement the IHC, it is necessary to detect preceding or oncoming vehicles. Generally, this process of detecting vehicles is done by detecting LBs in the images. Previous works regarding LB detection can largely be categorized into two approaches by the image type they use: low-exposure (LE) images or autoexposure (AE) images. While they each have their own strengths and weaknesses, the proposed method combines them by integrating the use of the partial region of the AE image confined by the lane detection information and the LE image. Consequently, the proposed method detects headlights at various distances and taillights at close distances using LE images while handling taillights at distant locations by exploiting the confined AE images. This approach enhances the performance of detecting the distant LBs while maintaining low false detections.",
"title": ""
},
{
"docid": "neg:1840423_15",
"text": "Guided by the aim to construct light fields with spin-like orbital angular momentum (OAM), that is light fields with a uniform and intrinsic OAM density, we investigate the OAM of arrays of optical vortices with rectangular symmetry. We find that the OAM per unit cell depends on the choice of unit cell and can even change sign when the unit cell is translated. This is the case even if the OAM in each unit cell is intrinsic, that is independent of the choice of measurement axis. We show that spin-like OAM can be found only if the OAM per unit cell vanishes. Our results are applicable to the z component of the angular momentum of any x- and y-periodic momentum distribution in the xy plane, and can also be applied other periodic light beams, arrays of rotating massive objects and periodic motion of liquids.",
"title": ""
},
{
"docid": "neg:1840423_16",
"text": "With adolescents’ frequent use of social media, electronic bullying has emerged as a powerful platform for peer victimization. The present two studies explore how adolescents perceive electronic vs. traditional bullying in emotional impact and strategic responses. In Study 1, 97 adolescents (mean age = 15) viewed hypothetical peer victimization scenarios, in parallel electronic and traditional forms, with female characters experiencing indirect relational aggression and direct verbal aggression. In Study 2, 47 adolescents (mean age = 14) viewed the direct verbal aggression scenario from Study 1, and a new scenario, involving male characters in the context of direct verbal aggression. Participants were asked to imagine themselves as the victim in all scenarios and then rate their emotional reactions, strategic responses, and goals for the outcome. Adolescents reported significant negative emotions and disruptions in typical daily activities as the victim across divergent bullying scenarios. In both studies few differences emerged when comparing electronic to traditional bullying, suggesting that online and off-line bullying are subtypes of peer victimization. There were expected differences in strategic responses that fit the medium of the bullying. Results also suggested that embarrassment is a common and highly relevant negative experience in both indirect relational and direct verbal aggression among",
"title": ""
},
{
"docid": "neg:1840423_17",
"text": "This paper describes the approach that was developed for SemEval 2018 Task 2 (Multilingual Emoji Prediction) by the DUTH Team. First, we employed a combination of preprocessing techniques to reduce the noise of tweets and produce a number of features. Then, we built several N-grams, to represent the combination of word and emojis. Finally, we trained our system with a tuned LinearSVC classifier. Our approach in the leaderboard ranked 18th amongst 48 teams.",
"title": ""
},
{
"docid": "neg:1840423_18",
"text": "Compressive sensing (CS) is an emerging approach for the acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1-D signals and 2-D images, many important applications involve multidimensional signals; the construction of sparsifying bases and measurement systems for such signals is complicated by their higher dimensionality. In this paper, we propose the use of Kronecker product matrices in CS for two purposes. First, such matrices can act as sparsifying bases that jointly model the structure present in all of the signal dimensions. Second, such matrices can represent the measurement protocols used in distributed settings. Our formulation enables the derivation of analytical bounds for the sparse approximation of multidimensional signals and CS recovery performance, as well as a means of evaluating novel distributed measurement schemes.",
"title": ""
},
{
"docid": "neg:1840423_19",
"text": "A novel macro model approach for modeling ESD MOS snapback is introduced. The macro model consists of standard components only. It includes a MOS transistor modeled by BSIM3v3, a bipolar transistor modeled by VBIC, and a resistor for substrate resistance. No external current source, which is essential in most publicly reported macro models, is included since both BSIM3vs and VBIC have formulations built in to model the relevant effects. The simplicity of the presented macro model makes behavior languages, such as Verilog-A, and special ESD equations not necessary in model implementation. This offers advantages of high simulation speed, wider availability, and less convergence issues. Measurement and simulation of the new approach indicates that good silicon correlation can be achieved.",
"title": ""
}
] |
1840424 | High-Spectral-Efficiency Optical Modulation Formats | [
{
"docid": "pos:1840424_0",
"text": "Shannon’s determination of the capacity of the linear Gaussian channel has posed a magnificent challenge to succeeding generations of researchers. This paper surveys how this challenge has been met during the past half century. Orthogonal minimumbandwidth modulation techniques and channel capacity are discussed. Binary coding techniques for low-signal-to-noise ratio (SNR) channels and nonbinary coding techniques for high-SNR channels are reviewed. Recent developments, which now allow capacity to be approached on any linear Gaussian channel, are surveyed. These new capacity-approaching techniques include turbo coding and decoding, multilevel coding, and combined coding/precoding for intersymbol-interference channels.",
"title": ""
}
] | [
{
"docid": "neg:1840424_0",
"text": "A new concept of building the controller of a thyristor based three-phase dual converter is presented in this paper. The controller is implemented using mixed mode digital-analog circuitry to achieve optimized performance. The realtime six state pulse patterns needed for the converter are generated by a specially designed ROM based circuit synchronized to the power frequency by a phase-locked-loop. The phase angle and other necessary commands for the converter are managed by an AT89C51 microcontroller. The proposed architecture offers 128-steps in the phase angle control, a resolution sufficient for most converter applications. Because of the hybrid nature of the implementation, the controller can change phase angles online smoothly. The computation burden on the microcontroller is nominal and hence it can easily undertake the tasks of monitoring diagnostic data like overload, loss of excitation and phase sequence. Thus a full fledged system is realizable with only one microcontroller chip, making the control system economic, reliable and efficient.",
"title": ""
},
{
"docid": "neg:1840424_1",
"text": "We study the problem of Key Exchange (KE), where authentication is two-factor and based on both electronically stored long keys and human-supplied credentials (passwords or biometrics). The latter credential has low entropy and may be adversarily mistyped. Our main contribution is the first formal treatment of mistyping in this setting. Ensuring security in presence of mistyping is subtle. We show mistypingrelated limitations of previous KE definitions and constructions (of Boyen et al. [7, 6, 10] and Kolesnikov and Rackoff [16]). We concentrate on the practical two-factor authenticated KE setting where servers exchange keys with clients, who use short passwords (memorized) and long cryptographic keys (stored on a card). Our work is thus a natural generalization of Halevi-Krawczyk [15] and Kolesnikov-Rackoff [16]. We discuss the challenges that arise due to mistyping. We propose the first KE definitions in this setting, and formally discuss their guarantees. We present efficient KE protocols and prove their security.",
"title": ""
},
{
"docid": "neg:1840424_2",
"text": "Chronotherapeutics aim at treating illnesses according to the endogenous biologic rhythms, which moderate xenobiotic metabolism and cellular drug response. The molecular clocks present in individual cells involve approximately fifteen clock genes interconnected in regulatory feedback loops. They are coordinated by the suprachiasmatic nuclei, a hypothalamic pacemaker, which also adjusts the circadian rhythms to environmental cycles. As a result, many mechanisms of diseases and drug effects are controlled by the circadian timing system. Thus, the tolerability of nearly 500 medications varies by up to fivefold according to circadian scheduling, both in experimental models and/or patients. Moreover, treatment itself disrupted, maintained, or improved the circadian timing system as a function of drug timing. Improved patient outcomes on circadian-based treatments (chronotherapy) have been demonstrated in randomized clinical trials, especially for cancer and inflammatory diseases. However, recent technological advances have highlighted large interpatient differences in circadian functions resulting in significant variability in chronotherapy response. Such findings advocate for the advancement of personalized chronotherapeutics through interdisciplinary systems approaches. Thus, the combination of mathematical, statistical, technological, experimental, and clinical expertise is now shaping the development of dedicated devices and diagnostic and delivery algorithms enabling treatment individualization. In particular, multiscale systems chronopharmacology approaches currently combine mathematical modeling based on cellular and whole-body physiology to preclinical and clinical investigations toward the design of patient-tailored chronotherapies. We review recent systems research works aiming to the individualization of disease treatment, with emphasis on both cancer management and circadian timing system-resetting strategies for improving chronic disease control and patient outcomes.",
"title": ""
},
{
"docid": "neg:1840424_3",
"text": "This paper tackles the problem of relative pose estimation between two monocular camera images in textureless scenes. Due to a lack of point matches, point-based approaches such as the 5-point algorithm often fail when used in these scenarios. Therefore we investigate relative pose estimation from line observations. We propose a new approach in which the relative pose estimation from lines is extended by a 3D line direction estimation step. The estimated line directions serve to improve the robustness and the efficiency of all processing phases: they enable us to guide the matching of line features and allow an efficient calculation of the relative pose. First, we describe in detail the novel 3D line direction estimation from a single image by clustering of parallel lines in the world. Secondly, we propose an innovative guided matching in which only clusters of lines with corresponding 3D line directions are considered. Thirdly, we introduce the new relative pose estimation based on 3D line directions. Finally, we combine all steps to a visual odometry system. We evaluate the different steps on synthetic and real sequences and demonstrate that in the targeted scenarios we outperform the state-of-the-art in both accuracy and computation time.",
"title": ""
},
{
"docid": "neg:1840424_4",
"text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .",
"title": ""
},
{
"docid": "neg:1840424_5",
"text": "We revisit a pioneer unsupervised learning technique called archetypal analysis, [5] which is related to successful data analysis methods such as sparse coding [18] and non-negative matrix factorization [19]. Since it was proposed, archetypal analysis did not gain a lot of popularity even though it produces more interpretable models than other alternatives. Because no efficient implementation has ever been made publicly available, its application to important scientific problems may have been severely limited. Our goal is to bring back into favour archetypal analysis. We propose a fast optimization scheme using an active-set strategy, and provide an efficient open-source implementation interfaced with Matlab, R, and Python. Then, we demonstrate the usefulness of archetypal analysis for computer vision tasks, such as codebook learning, signal classification, and large image collection visualization.",
"title": ""
},
{
"docid": "neg:1840424_6",
"text": " Queue stability (Chapter 2.1) Scheduling for stability, capacity regions (Chapter 2.3) Linear programs (Chapter 2.3, Chapter 3) Energy optimality (Chapter 3.2) Opportunistic scheduling (Chapter 2.3, Chapter 3, Chapter 4.6) Lyapunov drift and optimization (Chapter 4.1.0-4.1.2, 4.2, 4.3) Inequality constraints and virtual queues (Chapter 4.4) Drift-plus-penalty algorithm (Chapter 4.5) Performance and delay tradeoffs (Chapter 3.2, 4.5) Backpressure routing (Ex. 4.16, Chapter 5.2, 5.3)",
"title": ""
},
{
"docid": "neg:1840424_7",
"text": "Endowing machines with sensing capabilities similar to those of humans is a prevalent quest in engineering and computer science. In the pursuit of making computers sense their surroundings, a huge effort has been conducted to allow machines and computers to acquire, process, analyze and understand their environment in a human-like way. Focusing on the sense of hearing, the ability of computers to sense their acoustic environment as humans do goes by the name of machine hearing. To achieve this ambitious aim, the representation of the audio signal is of paramount importance. In this paper, we present an up-to-date review of the most relevant audio feature extraction techniques developed to analyze the most usual audio signals: speech, music and environmental sounds. Besides revisiting classic approaches for completeness, we include the latest advances in the field based on new domains of analysis together with novel bio-inspired proposals. These approaches are described following a taxonomy that organizes them according to their physical or perceptual basis, being subsequently divided depending on the domain of computation (time, frequency, wavelet, image-based, cepstral, or other domains). The description of the approaches is accompanied with recent examples of their application to machine hearing related problems.",
"title": ""
},
{
"docid": "neg:1840424_8",
"text": "Designing reliable user authentication on mobile phones is becoming an increasingly important task to protect users' private information and data. Since biometric approaches can provide many advantages over the traditional authentication methods, they have become a significant topic for both academia and industry. The major goal of biometric user authentication is to authenticate legitimate users and identify impostors based on physiological and behavioral characteristics. In this paper, we survey the development of existing biometric authentication techniques on mobile phones, particularly on touch-enabled devices, with reference to 11 biometric approaches (five physiological and six behavioral). We present a taxonomy of existing efforts regarding biometric authentication on mobile phones and analyze their feasibility of deployment on touch-enabled mobile phones. In addition, we systematically characterize a generic biometric authentication system with eight potential attack points and survey practical attacks and potential countermeasures on mobile phones. Moreover, we propose a framework for establishing a reliable authentication mechanism through implementing a multimodal biometric user authentication in an appropriate way. Experimental results are presented to validate this framework using touch dynamics, and the results show that multimodal biometrics can be deployed on touch-enabled phones to significantly reduce the false rates of a single biometric system. Finally, we identify challenges and open problems in this area and suggest that touch dynamics will become a mainstream aspect in designing future user authentication on mobile phones.",
"title": ""
},
{
"docid": "neg:1840424_9",
"text": "This paper presents a new iris database that contains images with noise. This is in contrast with the existing databases, that are noise free. UBIRIS is a tool for the development of robust iris recognition algorithms for biometric proposes. We present a detailed description of the many characteristics of UBIRIS and a comparison of several image segmentation approaches used in the current iris segmentation methods where it is evident their small tolerance to noisy images.",
"title": ""
},
{
"docid": "neg:1840424_10",
"text": "This paper presents the design and development of a microcontroller based heart rate monitor using fingertip sensor. The device uses the optical technology to detect the flow of blood through the finger and offers the advantage of portability over tape-based recording systems. The important feature of this research is the use of Discrete Fourier Transforms to analyse the ECG signal in order to measure the heart rate. Evaluation of the device on real signals shows accuracy in heart rate estimation, even under intense physical activity. The performance of HRM device was compared with ECG signal represented on an oscilloscope and manual pulse measurement of heartbeat, giving excellent results. Our proposed Heart Rate Measuring (HRM) device is economical and user friendly.",
"title": ""
},
{
"docid": "neg:1840424_11",
"text": "Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focusses on designing a specific feature detector that is fast, accurate and robust. In this paper the “Chess-board Extraction by Subtraction and Summation” (ChESS) feature detector, designed to exclusively respond to chess-board vertices, is presented. The method proposed is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chessboard pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration, as well as in Structured Light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors both under simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.",
"title": ""
},
{
"docid": "neg:1840424_12",
"text": "Social networks are currently gaining increasing impact especially in the light of the ongoing growth of web-based services like facebook.com. A central challenge for the social network analysis is the identification of key persons within a social network. In this context, the article aims at presenting the current state of research on centrality measures for social networks. In view of highly variable findings about the quality of various centrality measures, we also illustrate the tremendous importance of a reflected utilization of existing centrality measures. For this purpose, the paper analyzes five common centrality measures on the basis of three simple requirements for the behavior of centrality measures.",
"title": ""
},
{
"docid": "neg:1840424_13",
"text": "This paper presents SemFrame, a system that induces frame semantic verb classes from WordNet and LDOCE. Semantic frames are thought to have significant potential in resolving the paraphrase problem challenging many languagebased applications. When compared to the handcrafted FrameNet, SemFrame achieves its best recall-precision balance with 83.2% recall (based on SemFrame's coverage of FrameNet frames) and 73.8% precision (based on SemFrame verbs’ semantic relatedness to frame-evoking verbs). The next best performing semantic verb classes achieve 56.9% recall and 55.0% precision.",
"title": ""
},
{
"docid": "neg:1840424_14",
"text": "The increase in electronically mediated self-servic e technologies in the banking industry has impacted on the way banks service consumers. Despit e a large body of research on electronic banking channels, no study has been undertaken to e xplor the fit between electronic banking channels and banking tasks. Nor has there been rese a ch into how the ‘task-channel fit’ and other factors impact on consumers’ intention to use elect ronic banking channels. This paper proposes a theoretical model addressing these gaps. An explora tory study was first conducted, investigating industry experts’ perceptions towards the concept o f ‘task-channel fit’ and its relationship to other electronic banking channel variables. The findings demonstrated that the concept was perceived as being highly relevant by bank managers. A resear ch model was then developed drawing on the existing literature. To evaluate the research mode l quantitatively, a survey will be developed and validated, administered to a sample of consumers, a nd the resulting data used to test both measurement and structural aspects of the research model.",
"title": ""
},
{
"docid": "neg:1840424_15",
"text": "The ethical nature of transformational leadership has been hotly debated. This debate is demonstrated in the range of descriptors that have been used to label transformational leaders including narcissistic, manipulative, and self-centred, but also ethical, just and effective. Therefore, the purpose of the present research was to address this issue directly by assessing the statistical relationship between perceived leader integrity and transformational leadership using the Perceived Leader Integrity Scale (PLIS) and the Multi-Factor Leadership Questionnaire (MLQ). In a national sample of 1354 managers a moderate to strong positive relationship was found between perceived integrity and the demonstration of transformational leadership behaviours. A similar relationship was found between perceived integrity and developmental exchange leadership. A systematic leniency bias was identified when respondents rated subordinates vis-à-vis peer ratings. In support of previous findings, perceived integrity was also found to correlate positively with leader and organisational effectiveness measures.",
"title": ""
},
{
"docid": "neg:1840424_16",
"text": "Following the developments of wireless and mobile communication technologies, mobile-commerce (M-commerce) has become more and more popular. However, most of the existing M-commerce protocols do not consider the user anonymity during transactions. This means that it is possible to trace the identity of a payer from a M-commerce transaction. Luo et al. in 2014 proposed an NFC-based anonymous mobile payment protocol. It used an NFC-enabled smartphone and combined a built-in secure element (SE) as a trusted execution environment to build an anonymous mobile payment service. But their scheme has several problems and cannot be functional in practice. In this paper, we introduce a new NFC-based anonymous mobile payment protocol. Our scheme has the following features:(1) Anonymity. It prevents the disclosure of user's identity by using virtual identities instead of real identity during the transmission. (2) Efficiency. Confidentiality is achieved by symmetric key cryptography instead of public key cryptography so as to increase the performance. (3) Convenience. The protocol is based on NFC and is EMV compatible. (4) Security. All the transaction is either encrypted or signed by the sender so the confidentiality and authenticity are preserved.",
"title": ""
},
{
"docid": "neg:1840424_17",
"text": "This study examined the relationship between financial knowledge and credit card behavior of college students. The widespread availability of credit cards has raised concerns over how college students might use those cards given the negative consequences (both immediate and long-term) associated with credit abuse and mismanagement. Using a sample of 1,354 students from a major southeastern university, results suggest that financial knowledge is a significant factor in the credit card decisions of college students. Students with higher scores on a measure of personal financial knowledge are more likely to engage in more responsible credit card use. Specific behaviors chosen have been associated with greater costs of borrowing and adverse economic consequences in the past.",
"title": ""
},
{
"docid": "neg:1840424_18",
"text": "Fully-automatic facial expression recognition (FER) is a key component of human behavior analysis. Performing FER from still images is a challenging task as it involves handling large interpersonal morphological differences, and as partial occlusions can occasionally happen. Furthermore, labelling expressions is a time-consuming process that is prone to subjectivity, thus the variability may not be fully covered by the training data. In this work, we propose to train random forests upon spatially-constrained random local subspaces of the face. The output local predictions form a categorical expression-driven high-level representation that we call local expression predictions (LEPs). LEPs can be combined to describe categorical facial expressions as well as action units (AUs). Furthermore, LEPs can be weighted by confidence scores provided by an autoencoder network. Such network is trained to locally capture the manifold of the non-occluded training data in a hierarchical way. Extensive experiments show that the proposed LEP representation yields high descriptive power for categorical expressions and AU occurrence prediction, and leads to interesting perspectives towards the design of occlusion-robust and confidence-aware FER systems.",
"title": ""
},
{
"docid": "neg:1840424_19",
"text": "Osteoarthritis is a common disease, clinically manifested by joint pain, swelling and progressive loss of function. The severity of disease manifestations can vary but most of the patients only need intermittent symptom relief without major interventions. However, there is a group of patients that shows fast progression of the disease process leading to disability and ultimately joint replacement. Apart from symptom relief, no treatments have been identified that arrest or reverse the disease process. Therefore, there has been increasing attention devoted to the understanding of the mechanisms that are driving the disease process. Among these mechanisms, the biology of the cartilage-subchondral bone unit has been highlighted as key in osteoarthritis, and pathways that involve both cartilage and bone formation and turnover have become prime targets for modulation, and thus therapeutic intervention. Studies in developmental, genetic and joint disease models indicate that Wnt signaling is critically involved in these processes. Consequently, targeting Wnt signaling in a selective and tissue specific manner is an exciting opportunity for the development of disease modifying drugs for osteoarthritis.",
"title": ""
}
] |
1840425 | Mental health awareness: The Indian scenario | [
{
"docid": "pos:1840425_0",
"text": "CONTEXT\nLittle is known about the extent or severity of untreated mental disorders, especially in less-developed countries.\n\n\nOBJECTIVE\nTo estimate prevalence, severity, and treatment of Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) mental disorders in 14 countries (6 less developed, 8 developed) in the World Health Organization (WHO) World Mental Health (WMH) Survey Initiative.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nFace-to-face household surveys of 60 463 community adults conducted from 2001-2003 in 14 countries in the Americas, Europe, the Middle East, Africa, and Asia.\n\n\nMAIN OUTCOME MEASURES\nThe DSM-IV disorders, severity, and treatment were assessed with the WMH version of the WHO Composite International Diagnostic Interview (WMH-CIDI), a fully structured, lay-administered psychiatric diagnostic interview.\n\n\nRESULTS\nThe prevalence of having any WMH-CIDI/DSM-IV disorder in the prior year varied widely, from 4.3% in Shanghai to 26.4% in the United States, with an interquartile range (IQR) of 9.1%-16.9%. Between 33.1% (Colombia) and 80.9% (Nigeria) of 12-month cases were mild (IQR, 40.2%-53.3%). Serious disorders were associated with substantial role disability. Although disorder severity was correlated with probability of treatment in almost all countries, 35.5% to 50.3% of serious cases in developed countries and 76.3% to 85.4% in less-developed countries received no treatment in the 12 months before the interview. Due to the high prevalence of mild and subthreshold cases, the number of those who received treatment far exceeds the number of untreated serious cases in every country.\n\n\nCONCLUSIONS\nReallocation of treatment resources could substantially decrease the problem of unmet need for treatment of mental disorders among serious cases. Structural barriers exist to this reallocation. Careful consideration needs to be given to the value of treating some mild cases, especially those at risk for progressing to more serious disorders.",
"title": ""
}
] | [
{
"docid": "neg:1840425_0",
"text": "By understanding how real users have employed reliable multicast in real distributed systems, we can develop insight concerning the degree to which this technology has matched expectations. This paper reviews a number of applications with that goal in mind. Our findings point to tradeoffs between the form of reliability used by a system and its scalability and performance. We also find that to reach a broad user community (and a commercially interesting market) the technology must be better integrated with component and object-oriented systems architectures. Looking closely at these architectures, however, we identify some assumptions about failure handling which make reliable multicast difficult to exploit. Indeed, the major failures of reliable multicast are associated wit failures. The broader opportunity appears to involve relatively visible embeddings of these tools int h attempts to position it within object oriented systems in ways that focus on transparent recovery from server o object-oriented architectures enabling knowledgeable users to make tradeoffs. Fault-tolerance through transparent server replication may be better viewed as an unachievable holy grail.",
"title": ""
},
{
"docid": "neg:1840425_1",
"text": "We define Quality of Service (QoS) and cost model for communications in Systems on Chip (SoC), and derive related Network on Chip (NoC) architecture and design process. SoC inter-module communication traffic is classified into four classes of service: signaling (for inter-module control signals); real-time (representing delay-constrained bit streams); RD/WR (modeling short data access) and block-transfer (handling large data bursts). Communication traffic of the target SoC is analyzed (by means of analytic calculations and simulations), and QoS requirements (delay and throughput) for each service class are derived. A customized Quality-of-Service NoC (QNoC) architecture is derived by modifying a generic network architecture. The customization process minimizes the network cost (in area and power) while maintaining the required QoS. The generic network is based on a two-dimensional planar mesh and fixed shortest path (X–Y based) multi-class wormhole routing. Once communication requirements of the target SoC are identified, the network is customized as follows: The SoC modules are placed so as to minimize spatial traffic density, unnecessary mesh links and switching nodes are removed, and bandwidth is allocated to the remaining links and switches according to their relative load so that link utilization is balanced. The result is a low cost customized QNoC for the target SoC which guarantees that QoS requirements are met. 2003 Elsevier B.V. All rights reserved. IDT: Network on chip; QoS architecture; Wormhole switching; QNoC design process; QNoC",
"title": ""
},
{
"docid": "neg:1840425_2",
"text": "This paper proposes a novel approach for the evolution of artificial creatures which moves in a 3D virtual environment based on the neuroevolution of augmenting topologies (NEAT) algorithm. The NEAT algorithm is used to evolve neural networks that observe the virtual environment and respond to it, by controlling the muscle force of the creature. The genetic algorithm is used to emerge the architecture of creature based on the distance metrics for fitness evaluation. The damaged morphologies of creature are elaborated, and a crossover algorithm is used to control it. Creatures with similar morphological traits are grouped into the same species to limit the complexity of the search space. The motion of virtual creature having 2–3 limbs is recorded at three different angles to check their performance in different types of viscous mediums. The qualitative demonstration of motion of virtual creature represents that improved swimming of virtual creatures is achieved in simulating mediums with viscous drag 1–10 arbitrary unit.",
"title": ""
},
{
"docid": "neg:1840425_3",
"text": "The Internet of Things has drawn lots of research attention as the growing number of devices connected to the Internet. Long Term Evolution-Advanced (LTE-A) is a promising technology for wireless communication and it's also promising for IoT. The main challenge of incorporating IoT devices into LTE-A is a large number of IoT devices attempting to access the network in a short period which will greatly reduce the network performance. In order to improve the network utilization, we adopted a hierarchy architecture using a gateway for connecting the devices to the eNB and proposed a multiclass resource allocation algorithm for LTE based IoT communication. Simulation results show that the proposed algorithm can provide good performance both on data rate and latency for different QoS applications both in saturated and unsaturated environment.",
"title": ""
},
{
"docid": "neg:1840425_4",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
{
"docid": "neg:1840425_5",
"text": "Objective A sound theoretical foundation to guide practice is enhanced by the ability of nurses to critique research. This article provides a structured route to questioning the methodology of nursing research. Primary Argument Nurses may find critiquing a research paper a particularly daunting experience when faced with their first paper. Knowing what questions the nurse should be asking is perhaps difficult to determine when there may be unfamiliar research terms to grasp. Nurses may benefit from a structured approach which helps them understand the sequence of the text and the subsequent value of a research paper. Conclusion A framework is provided within this article to assist in the analysis of a research paper in a systematic, logical order. The questions presented in the framework may lead the nurse to conclusions about the strengths and weaknesses of the research methods presented in a research article. The framework does not intend to separate quantitative or qualitative paradigms but to assist the nurse in making broad observations about the nature of the research.",
"title": ""
},
{
"docid": "neg:1840425_6",
"text": "This paper surveys current text and speech summarization evaluation approaches. It discusses advantages and disadv ant ges of these, with the goal of identifying summarization techni ques most suitable to speech summarization. Precision/recall s hemes, as well as summary accuracy measures which incorporate weig htings based on multiple human decisions, are suggested as par ticularly suitable in evaluating speech summaries.",
"title": ""
},
{
"docid": "neg:1840425_7",
"text": "Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of likeminded people where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of the public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean field approximation of the newly introduced models.",
"title": ""
},
{
"docid": "neg:1840425_8",
"text": "This article presents a reproducible research workflow for amplicon-based microbiome studies in personalized medicine created using Bioconductor packages and the knitr markdown interface.We show that sometimes a multiplicity of choices and lack of consistent documentation at each stage of the sequential processing pipeline used for the analysis of microbiome data can lead to spurious results. We propose its replacement with reproducible and documented analysis using R packages dada2, knitr, and phyloseq. This workflow implements both key stages of amplicon analysis: the initial filtering and denoising steps needed to construct taxonomic feature tables from error-containing sequencing reads (dada2), and the exploratory and inferential analysis of those feature tables and associated sample metadata (phyloseq). This workow facilitates reproducible interrogation of the full set of choices required in microbiome studies. We present several examples in which we leverage existing packages for analysis in a way that allows easy sharing and modification by others, and give pointers to articles that depend on this reproducible workflow for the study of longitudinal and spatial series analyses of the vaginal microbiome in pregnancy and the oral microbiome in humans with healthy dentition and intra-oral tissues.",
"title": ""
},
{
"docid": "neg:1840425_9",
"text": "Models are crucial in the engineering design process because they can be used for both the optimization of design parameters and the prediction of performance. Thus, models can significantly reduce design, development and optimization costs. This paper proposes a novel equivalent electrical model for Darrieus-type vertical axis wind turbines (DTVAWTs). The proposed model was built from the mechanical description given by the Paraschivoiu double-multiple streamtube model and is based on the analogy between mechanical and electrical circuits. This work addresses the physical concepts and theoretical formulations underpinning the development of the model. After highlighting the working principle of the DTVAWT, the step-by-step development of the model is presented. For assessment purposes, simulations of aerodynamic characteristics and those of corresponding electrical components are performed and compared.",
"title": ""
},
{
"docid": "neg:1840425_10",
"text": "Quinoa (Chenopodium quinoa Willd.), which is considered a pseudocereal or pseudograin, has been recognized as a complete food due to its protein quality. It has remarkable nutritional properties; not only from its protein content (15%) but also from its great amino acid balance. It is an important source of minerals and vitamins, and has also been found to contain compounds like polyphenols, phytosterols, and flavonoids with possible nutraceutical benefits. It has some functional (technological) properties like solubility, water-holding capacity (WHC), gelation, emulsifying, and foaming that allow diversified uses. Besides, it has been considered an oil crop, with an interesting proportion of omega-6 and a notable vitamin E content. Quinoa starch has physicochemical properties (such as viscosity, freeze stability) which give it functional properties with novel uses. Quinoa has a high nutritional value and has recently been used as a novel functional food because of all these properties; it is a promising alternative cultivar.",
"title": ""
},
{
"docid": "neg:1840425_11",
"text": "Every arti cial-intelligence research project needs a working de nition of \\intelligence\", on which the deepest goals and assumptions of the research are based. In the project described in the following chapters, \\intelligence\" is de ned as the capacity to adapt under insu cient knowledge and resources. Concretely, an intelligent system should be nite and open, and should work in real time. If these criteria are used in the design of a reasoning system, the result is NARS, a non-axiomatic reasoning system. NARS uses a term-oriented formal language, characterized by the use of subject{ predicate sentences. The language has an experience-grounded semantics, according to which the truth value of a judgment is determined by previous experience, and the meaning of a term is determined by its relations with other terms. Several di erent types of uncertainty, such as randomness, fuzziness, and ignorance, can be represented in the language in a single way. The inference rules of NARS are based on three inheritance relations between terms. With di erent combinations of premises, revision, deduction, induction, abduction, exempli cation, comparison, and analogy can all be carried out in a uniform format, the major di erence between these types of inference being that di erent functions are used to calculate the truth value of the conclusion from the truth values of the premises. viii ix Since it has insu cient space{time resources, the system needs to distribute them among its tasks very carefully, and to dynamically adjust the distribution as the situation changes. This leads to a \\controlled concurrency\" control mechanism, and a \\bag-based\" memory organization. A recent implementation of the NARS model, with examples, is discussed. The system has many interesting properties that are shared by human cognition, but are absent from conventional computational models of reasoning. This research sheds light on several notions in arti cial intelligence and cognitive science, including symbol-grounding, induction, categorization, logic, and computation. These are discussed to show the implications of the new theory of intelligence. Finally, the major results of the research are summarized, a preliminary evaluation of the working de nition of intelligence is given, and the limitations and future extensions of the research are discussed.",
"title": ""
},
{
"docid": "neg:1840425_12",
"text": "BACKGROUND\nCultivated bananas and plantains are giant herbaceous plants within the genus Musa. They are both sterile and parthenocarpic so the fruit develops without seed. The cultivated hybrids and species are mostly triploid (2n = 3x = 33; a few are diploid or tetraploid), and most have been propagated from mutants found in the wild. With a production of 100 million tons annually, banana is a staple food across the Asian, African and American tropics, with the 15 % that is exported being important to many economies.\n\n\nSCOPE\nThere are well over a thousand domesticated Musa cultivars and their genetic diversity is high, indicating multiple origins from different wild hybrids between two principle ancestral species. However, the difficulty of genetics and sterility of the crop has meant that the development of new varieties through hybridization, mutation or transformation was not very successful in the 20th century. Knowledge of structural and functional genomics and genes, reproductive physiology, cytogenetics, and comparative genomics with rice, Arabidopsis and other model species has increased our understanding of Musa and its diversity enormously.\n\n\nCONCLUSIONS\nThere are major challenges to banana production from virulent diseases, abiotic stresses and new demands for sustainability, quality, transport and yield. Within the genepool of cultivars and wild species there are genetic resistances to many stresses. Genomic approaches are now rapidly advancing in Musa and have the prospect of helping enable banana to maintain and increase its importance as a staple food and cash crop through integration of genetical, evolutionary and structural data, allowing targeted breeding, transformation and efficient use of Musa biodiversity in the future.",
"title": ""
},
{
"docid": "neg:1840425_13",
"text": "Blockchain has drawn attention as the next-generation financial technology due to its security that suits the informatization era. In particular, it provides security through the authentication of peers that share virtual cash, encryption, and the generation of hash value. According to the global financial industry, the market for security-based blockchain technology is expected to grow to about USD 20 billion by 2020. In addition, blockchain can be applied beyond the Internet of Things (IoT) environment; its applications are expected to expand. Cloud computing has been dramatically adopted in all IT environments for its efficiency and availability. In this paper, we discuss the concept of blockchain technology and its hot research trends. In addition, we will study how to adapt blockchain security to cloud computing and its secure solutions in detail.",
"title": ""
},
{
"docid": "neg:1840425_14",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "neg:1840425_15",
"text": "This paper investigates the antecedents and consequences of customer loyalty in an online business-to-consumer (B2C) context. We identify eight factors (the 8Cs—customization, contact interactivity, care, community, convenience, cultivation, choice, and character) that potentially impact e-loyalty and develop scales to measure these factors. Data collected from 1,211 online customers demonstrate that all these factors, except convenience, impact e-loyalty. The data also reveal that e-loyalty has an impact on two customer-related outcomes: word-ofmouth promotion and willingness to pay more. © 2002 by New York University. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840425_16",
"text": "Application programmers increasingly prefer distributed storage systems with strong consistency and distributed transactions (e.g., Google's Spanner) for their strong guarantees and ease of use. Unfortunately, existing transactional storage systems are expensive to use -- in part because they require costly replication protocols, like Paxos, for fault tolerance. In this paper, we present a new approach that makes transactional storage systems more affordable: we eliminate consistency from the replication protocol while still providing distributed transactions with strong consistency to applications.\n We present TAPIR -- the Transactional Application Protocol for Inconsistent Replication -- the first transaction protocol to use a novel replication protocol, called inconsistent replication, that provides fault tolerance without consistency. By enforcing strong consistency only in the transaction protocol, TAPIR can commit transactions in a single round-trip and order distributed transactions without centralized coordination. We demonstrate the use of TAPIR in a transactional key-value store, TAPIR-KV. Compared to conventional systems, TAPIR-KV provides better latency and throughput.",
"title": ""
},
{
"docid": "neg:1840425_17",
"text": "PsV: psoriasis vulgaris INTRODUCTION Pityriasis amiantacea is a rare clinical condition characterized by masses of waxy and sticky scales that adhere to the scalp and tenaciously attach to hair bundles. Pityriasis amiantacea can be associated with psoriasis vulgaris (PsV).We examined a patient with pityriasis amiantacea caused by PsV who also had keratotic horns on the scalp, histopathologically fibrokeratomas. To the best of our knowledge, this is the first case of scalp fibrokeratoma stimulated by pityriasis amiantacea and PsV.",
"title": ""
},
{
"docid": "neg:1840425_18",
"text": "We introduce Anita: a flexible and intelligent Text Adaptation tool for web content that provides Text Simplification and Text Enhancement modules. Anita’s simplification module features a state-of-the-art system that adapts texts according to the needs of individual users, and its enhancement module allows the user to search for a word’s definitions, synonyms, translations, and visual cues through related images. These utilities are brought together in an easy-to-use interface of a freely available web browser extension.",
"title": ""
},
{
"docid": "neg:1840425_19",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
}
] |
1840426 | Disassembling gamification: the effects of points and meaning on user motivation and performance | [
{
"docid": "pos:1840426_0",
"text": "More Americans now play video games than go to the movies (NPD Group, 2009). The meteoric rise in popularity of video games highlights the need for research approaches that can deepen our scientific understanding of video game engagement. This article advances a theory-based motivational model for examining and evaluating the ways by which video game engagement shapes psychological processes and influences well-being. Rooted in self-determination theory (Deci & Ryan, 2000; Ryan & Deci, 2000a), our approach suggests that both the appeal and well-being effects of video games are based in their potential to satisfy basic psychological needs for competence, autonomy, and relatedness. We review recent empirical evidence applying this perspective to a number of topics including need satisfaction in games and short-term well-being, the motivational appeal of violent game content, motivational sources of postplay aggression, the antecedents and consequences of disordered patterns of game engagement, and the determinants and effects of immersion. Implications of this model for the future study of game motivation and the use of video games in interventions are discussed.",
"title": ""
},
{
"docid": "pos:1840426_1",
"text": "We conduct a natural field experiment that explores the relationship between the “meaningfulness” of a task and people’s willingness to work. Our study uses workers from Amazon’s Mechanical Turk (MTurk), an online marketplace for task-based work. All participants are given an identical task of labeling medical images. However, the task is presented differently depending on treatment. Subjects assigned to the meaningful treatment are told they would be helping researchers label tumor cells, whereas subjects in the zero-context treatment are not told the purpose of their task and only told that they would be labeling “objects of interest”. Our experimental design specifically hires US and Indian workers in order to test for heterogeneous effects. We find that US, but not Indian, workers are induced to work at a higher proportion when given cues that their task was meaningful. However, conditional on working, whether a task was framed as meaningful does not induce greater or higher quality output in either the US or in India.",
"title": ""
},
{
"docid": "pos:1840426_2",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] | [
{
"docid": "neg:1840426_0",
"text": "Action anticipation aims to detect an action before it happens. Many real world applications in robotics and surveillance are related to this predictive capability. Current methods address this problem by first anticipating visual representations of future frames and then categorizing the anticipated representations to actions. However, anticipation is based on a single past frame’s representation, which ignores the history trend. Besides, it can only anticipate a fixed future time. We propose a Reinforced Encoder-Decoder (RED) network for action anticipation. RED takes multiple history representations as input and learns to anticipate a sequence of future representations. One salient aspect of RED is that a reinforcement module is adopted to provide sequence-level supervision; the reward function is designed to encourage the system to make correct predictions as early as possible. We test RED on TVSeries, THUMOS-14 and TV-Human-Interaction datasets for action anticipation and achieve state-of-the-art performance on all datasets.",
"title": ""
},
{
"docid": "neg:1840426_1",
"text": "This study presents and examines SamEx, a mobile learning system used by 305 students in formal and informal learning in a primary school in Singapore. Students use SamEx in situ to capture media such as pictures, video clips and audio recordings, comment on them, and share them with their peers. In this paper we report on the experiences of students in using the application throughout a one-year period with a focus on self-directedness, quality of contributions, and answers to contextual question prompts. We examine how the usage of tools such as SamEx predicts students' science examination results, discuss the role of badges as an extrinsic motivational tool, and explore how individual and collaborative learning emerge. Our research shows that the quantity and quality of contributions provided by the students in SamEx predict the end-year assessment score. With respect to specific system features, contextual answers given by the students and the overall likes received by students are also correlated with the end-year assessment score. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840426_2",
"text": "People increasingly use smartwatches in tandem with other devices such as smartphones, laptops or tablets. This allows for novel cross-device applications that use the watch as both input device and output display. However, despite the increasing availability of smartwatches, prototyping cross-device watch-centric applications remains a challenging task. Developers are limited in the applications they can explore as available toolkits provide only limited access to different types of input sensors for cross-device interactions. To address this problem, we introduce WatchConnect, a toolkit for rapidly prototyping cross-device applications and interaction techniques with smartwatches. The toolkit provides developers with (i) an extendable hardware platform that emulates a smartwatch, (ii) a UI framework that integrates with an existing UI builder, and (iii) a rich set of input and output events using a range of built-in sensor mappings. We demonstrate the versatility and design space of the toolkit with five interaction techniques and applications.",
"title": ""
},
{
"docid": "neg:1840426_3",
"text": "This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the diagram cycles codifies invariants formulae which enable us to establish the soundness and completeness of the system with respect to the problem it tries to resolve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.",
"title": ""
},
{
"docid": "neg:1840426_4",
"text": "This paper reviews the technology trends of BCD (Bipolar-CMOS-DMOS) technology in terms of voltage capability, switching speed of power transistor, and high integration of logic CMOS for SoC (System-on-Chip) solution requiring high-voltage devices. Recent trends such like modularity of the process, power metal routing, and high-density NVM (Non-Volatile Memory) are also discussed.",
"title": ""
},
{
"docid": "neg:1840426_5",
"text": "Images captured in low-light conditions usually suffer from very low contrast, which increases the difficulty of subsequent computer vision tasks in a great extent. In this paper, a low-light image enhancement model based on convolutional neural network and Retinex theory is proposed. Firstly, we show that multi-scale Retinex is equivalent to a feedforward convolutional neural network with different Gaussian convolution kernels. Motivated by this fact, we consider a Convolutional Neural Network(MSR-net) that directly learns an end-to-end mapping between dark and bright images. Different fundamentally from existing approaches, low-light image enhancement in this paper is regarded as a machine learning problem. In this model, most of the parameters are optimized by back-propagation, while the parameters of traditional models depend on the artificial setting. Experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods from the qualitative and quantitative perspective.",
"title": ""
},
{
"docid": "neg:1840426_6",
"text": "Nowadays, World Wide Web (WWW) surfing is becoming a risky task with the Web becoming rich in all sorts of attack. Websites are the main source of many scams, phishing attacks, identity theft, SPAM commerce and malware. Nevertheless, browsers, blacklists, and popup blockers are not enough to protect users. According to this, fast and accurate systems still to be needed with the ability to detect new malicious content. By taking into consideration, researchers have developed various Malicious Website detection techniques in recent years. Analyzing those works available in the literature can provide good knowledge on this topic and also, it will lead to finding the recent problems in Malicious Website detection. Accordingly, I have planned to do a comprehensive study with the literature of Malicious Website detection techniques. To categorize the techniques, all articles that had the word “malicious detection” in its title or as its keyword published between January 2003 to august 2016, is first selected from the scientific journals: IEEE, Elsevier, Springer and international journals. After the collection of research articles, we discuss every research paper. In addition, this study gives an elaborate idea about malicious detection.",
"title": ""
},
{
"docid": "neg:1840426_7",
"text": "Support vector machines (SVMs) have been recognized as one o f th most successful classification methods for many applications including text classific ation. Even though the learning ability and computational complexity of training in support vector machines may be independent of the dimension of the feature space, reducing computational com plexity is an essential issue to efficiently handle a large number of terms in practical applicat ions of text classification. In this paper, we adopt novel dimension reduction methods to reduce the dim nsion of the document vectors dramatically. We also introduce decision functions for the centroid-based classification algorithm and support vector classifiers to handle the classification p r blem where a document may belong to multiple classes. Our substantial experimental results sh ow t at with several dimension reduction methods that are designed particularly for clustered data, higher efficiency for both training and testing can be achieved without sacrificing prediction accu ra y of text classification even when the dimension of the input space is significantly reduced.",
"title": ""
},
{
"docid": "neg:1840426_8",
"text": "One of the themes of Emotion and Decision-Making Explained (Rolls, 2014c) is that there are multiple routes to emotionrelated responses, with some illustrated in Fig. 1. Brain systems involved in decoding stimuli in terms of whether they are instrumental reinforcers so that goal directed actions may be performed to obtain or avoid the stimuli are emphasized as being important for emotional states, for an intervening state may be needed to bridge the time gap between the decoding of a goal-directed stimulus, and the actions that may need to be set into train and directed to obtain or avoid the emotionrelated stimulus. In contrast, when unconditioned or classically conditioned responses such as autonomic responses, freezing, turning away etc. are required, there is no need for intervening states such as emotional states. These points are covered in Chapters 2e4 and 10 of the book. Ono and Nishijo (2014) raise the issue of the extent to which subcortical pathways are involved in the elicitation of some of these emotion-related responses. They describe interesting research that pulvinar neurons in macaques may respond to snakes, and may provide a route that does not require cortical processing for some probably innately specified visual stimuli to produce responses. With respect to Fig. 1, the pathway is that some of the inputs labeled as primary reinforcers may reach brain regions including the amygdala by a subcortical route. LeDoux (2012) provides evidence in the same direction, in his case involving a ‘low road’ for auditory stimuli such as tones (which do not required cortical processing) to reach, via a subcortical pathway, the amygdala, where classically conditioned e.g., freezing and autonomic responses may be learned. Consistently, there is evidence (Chapter 4) that humans with damage to the primary visual cortex who describe themselves as blind do nevertheless show some responses to stimuli such as a face expression (de Gelder, Vroomen, Pourtois, & Weiskrantz, 1999; Tamietto et al., 2009; Tamietto & de Gelder, 2010). I agree that the elicitation of unconditioned and conditioned responses to these particular types of stimuli (LeDoux, 2014) is of interest (Rolls, 2014a). However, in Emotion and Decision-Making Explained, I emphasize that there aremassive cortical inputs to structures involved in emotion such as the amygdala and orbitofrontal cortex, and that neurons in both structures can have viewinvariant responses to visual stimuli including faces which specify face identity, and can have responses that are selective for particular emotional expressions (Leonard, Rolls, Wilson, & Baylis, 1985; Rolls, 1984, 2007, 2011, 2012; Rolls, Critchley, Browning, & Inoue, 2006) which reflect the neuronal responses found in the temporal cortical and related visual areas, as we discovered (Perrett, Rolls, & Caan, 1982; Rolls, 2007, 2008a, 2011, 2012; Sanghera, Rolls, & Roper-Hall, 1979). View invariant representations are important for",
"title": ""
},
{
"docid": "neg:1840426_9",
"text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.",
"title": ""
},
{
"docid": "neg:1840426_10",
"text": "This paper describes a general framework for learning Higher-Order Network Embeddings (HONE) from graph data based on network motifs. The HONE framework is highly expressive and flexible with many interchangeable components. The experimental results demonstrate the effectiveness of learning higher-order network representations. In all cases, HONE outperforms recent embedding methods that are unable to capture higher-order structures with a mean relative gain in AUC of 19% (and up to 75% gain) across a wide variety of networks and embedding methods.",
"title": ""
},
{
"docid": "neg:1840426_11",
"text": "Transfer printing represents a set of techniques for deterministic assembly of micro-and nanomaterials into spatially organized, functional arrangements with two and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing.",
"title": ""
},
{
"docid": "neg:1840426_12",
"text": "While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly. To increase the practicality of these techniques on real robots, we propose a modular deep reinforcement learning method capable of transferring models trained in simulation to a real-world robotic task. We introduce a bottleneck between perception and control, enabling the networks to be trained independently, but then merged and fine-tuned in an end-to-end manner to further improve hand-eye coordination. On a canonical, planar visually-guided robot reaching task a fine-tuned accuracy of 1.6 pixels is achieved, a significant improvement over naive transfer (17.5 pixels), showing the potential for more complicated and broader applications. Our method provides a technique for more efficient learning and transfer of visuomotor policies for real robotic systems without relying entirely on large real-world robot datasets.",
"title": ""
},
{
"docid": "neg:1840426_13",
"text": "The automata-theoretic approach to linear temporal logic uses the theory of automata as a unifying paradigm for program specification, verification, and synthesis. Both programs and specifications are in essence descriptions of computations. These computations can be viewed as words over some alphabet. Thus,programs and specificationscan be viewed as descriptions of languagesover some alphabet. The automata-theoretic perspective considers the relationships between programs and their specifications as relationships between languages.By translating programs and specifications to automata, questions about programs and their specifications can be reduced to questions about automata. More specifically, questions such as satisfiability of specifications and correctness of programs with respect to their specifications can be reduced to questions such as nonemptiness and containment of automata. Unlike classical automata theory, which focused on automata on finite words, the applications to program specification, verification, and synthesis, use automata on infinite words, since the computations in which we are interested are typically infinite. This paper provides an introduction to the theory of automata on infinite words and demonstrates its applications to program specification, verification, and synthesis.",
"title": ""
},
{
"docid": "neg:1840426_14",
"text": "During the last decade, the applications of signal processing have drastically improved with deep learning. However areas of affecting computing such as emotional speech synthesis or emotion recognition from spoken language remains challenging. In this paper, we investigate the use of a neural Automatic Speech Recognition (ASR) as a feature extractor for emotion recognition. We show that these features outperform the eGeMAPS feature set to predict the valence and arousal emotional dimensions, which means that the audio-to-text mapping learned by the ASR system contains information related to the emotional dimensions in spontaneous speech. We also examine the relationship between first layers (closer to speech) and last layers (closer to text) of the ASR and valence/arousal.",
"title": ""
},
{
"docid": "neg:1840426_15",
"text": "This paper presents the design of the robot AILA, a mobile dual-arm robot system developed as a research platform for investigating aspects of the currently booming multidisciplinary area of mobile manipulation. The robot integrates and allows in a single platform to perform research in most of the areas involved in autonomous robotics: navigation, mobile and dual-arm manipulation planning, active compliance and force control strategies, object recognition, scene representation, and semantic perception. AILA has 32 degrees of freedom, including 7-DOF arms, 4-DOF torso, 2-DOF head, and a mobile base equipped with six wheels, each of them with two degrees of freedom. The primary design goal was to achieve a lightweight arm construction with a payload-to-weight ratio greater than one. Besides, an adjustable body should sustain the dual-arm system providing an extended workspace. In addition, mobility is provided by means of a wheel-based mobile base. As a result, AILA's arms can lift 8kg and weigh 5.5kg, thus achieving a payload-to-weight ratio of 1.45. The paper will provide an overview of the design, especially in the mechatronics area, as well as of its realization, the sensors incorporated in the system, and its control software.",
"title": ""
},
{
"docid": "neg:1840426_16",
"text": "The phosphor deposits of the β-sialon:Eu2+ mixed with various amounts (0-1 g) of the SnO₂ nanoparticles were fabricated by the electrophoretic deposition (EPD) process. The mixed SnO₂ nanoparticles was observed to cover onto the particle surfaces of the β-sialon:Eu2+ as well as fill in the voids among the phosphor particles. The external and internal quantum efficiencies (QEs) of the prepared deposits were found to be dependent on the mixing amount of the SnO₂: by comparing with the deposit without any mixing (48% internal and 38% external QEs), after mixing the SnO₂ nanoparticles, the both QEs were improved to 55% internal and 43% external QEs at small mixing amount (0.05 g); whereas, with increasing the mixing amount to 0.1 and 1 g, they were reduced to 36% and 29% for the 0.1 g addition and 15% and 12% l QEs for the 1 g addition. More interestingly, tunable color appearances of the deposits prepared by the EPD process were achieved, from yellow green to blue, by varying the addition amount of the SnO₂, enabling it as an alternative technique instead of altering the voltage and depositing time for the color appearance controllability.",
"title": ""
},
{
"docid": "neg:1840426_17",
"text": "Beam scanning arrays typically suffer from scan loss; an increasing degradation in gain as the beam is scanned from broadside toward the horizon in any given scan plane. Here, a metasurface is presented that reduces the effects of scan loss for a leaky-wave antenna (LWA). The metasurface is simple, being composed of an ultrathin sheet of subwavelength split-ring resonators. The leaky-wave structure is balanced, scanning from the forward region, through broadside, and into the backward region, and designed to scan in the magnetic plane. The metasurface is effectively invisible at broadside, where balanced LWAs are most sensitive to external loading. It is shown that the introduction of the metasurface results in increased directivity, and hence, gain, as the beam is scanned off broadside, having an increasing effect as the beam is scanned to the horizon. Simulations show that the metasurface improves the effective aperture distribution at higher scan angles, resulting in a more directive main beam, while having a negligible impact on cross-polarization gain. Experimental validation results show that the scan range of the antenna is increased from $-39 {^{\\circ }} \\leq \\theta \\leq +32 {^{\\circ }}$ to $-64 {^{\\circ }} \\leq \\theta \\leq +70 {^{\\circ }}$ , when loaded with the metasurface, demonstrating a flattened gain profile over a 135° range centered about broadside. Moreover, this scan range occurs over a frequency band spanning from 9 to 15.5 GHz, demonstrating a relative bandwidth of 53% for the metasurface.",
"title": ""
},
{
"docid": "neg:1840426_18",
"text": "Video-based traffic flow monitoring is a fast emerging field based on the continuous development of computer vision. A survey of the state-of-the-art video processing techniques in traffic flow monitoring is presented in this paper. Firstly, vehicle detection is the first step of video processing and detection methods are classified into background modeling based methods and non-background modeling based methods. In particular, nighttime detection is more challenging due to bad illumination and sensitivity to light. Then tracking techniques, including 3D model-based, region-based, active contour-based and feature-based tracking, are presented. A variety of algorithms including MeanShift algorithm, Kalman Filter and Particle Filter are applied in tracking process. In addition, shadow detection and vehicles occlusion bring much trouble into vehicle detection, tracking and so on. Based on the aforementioned video processing techniques, discussion on behavior understanding including traffic incident detection is carried out. Finally, key challenges in traffic flow monitoring are discussed.",
"title": ""
},
{
"docid": "neg:1840426_19",
"text": "Epigenome-wide association studies represent one means of applying genome-wide assays to identify molecular events that could be associated with human phenotypes. The epigenome is especially intriguing as a target for study, as epigenetic regulatory processes are, by definition, heritable from parent to daughter cells and are found to have transcriptional regulatory properties. As such, the epigenome is an attractive candidate for mediating long-term responses to cellular stimuli, such as environmental effects modifying disease risk. Such epigenomic studies represent a broader category of disease -omics, which suffer from multiple problems in design and execution that severely limit their interpretability. Here we define many of the problems with current epigenomic studies and propose solutions that can be applied to allow this and other disease -omics studies to achieve their potential for generating valuable insights.",
"title": ""
}
] |
1840427 | Adaptability of Neural Networks on Varying Granularity IR Tasks | [
{
"docid": "pos:1840427_0",
"text": "In this paper we address the following problem in web document and information retrieval (IR): How can we use long-term context information to gain better IR performance? Unlike common IR methods that use bag of words representation for queries and documents, we treat them as a sequence of words and use long short term memory (LSTM) to capture contextual dependencies. To the best of our knowledge, this is the first time that LSTM is applied to information retrieval tasks. Unlike training traditional LSTMs, the training strategy is different due to the special nature of information retrieval problem. Experimental evaluation on an IR task derived from the Bing web search demonstrates the ability of the proposed method in addressing both lexical mismatch and long-term context modelling issues, thereby, significantly outperforming existing state of the art methods for web document retrieval task.",
"title": ""
},
{
"docid": "pos:1840427_1",
"text": "We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains. Various architectures are presented and compared. We create and release a QA corpus and setup a new QA task in the insurance domain. Experimental results demonstrate superior performance compared to the baseline methods and various technologies give further improvements. For this highly challenging task, the top-1 accuracy can reach up to 65.3% on a test set, which indicates a great potential for practical use.",
"title": ""
},
{
"docid": "pos:1840427_2",
"text": "In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.",
"title": ""
},
{
"docid": "pos:1840427_3",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
}
] | [
{
"docid": "neg:1840427_0",
"text": "Software Defined Network (SDN) facilitates network programmers with easier network monitoring, identification of anomalies, instant implementation of changes, central control to the whole network in a cost effective and efficient manner. These features could be beneficial for securing and maintaining entire network. Being a promising network paradigm, it draws a lot of attention from researchers in security domain. But it's logically centralized control tends to single point of failure, increasing the risk of attacks such as Distributed Denial of Service (DDoS) attack. In this paper, we have tried to identify various possibilities of DDoS attacks in SDN environment with the help of attack tree and an attack model. Further, an attempt to analyze the impact of various traditional DDoS attacks on SDN components is done. Such analysis helps in identifying the type of DDoS attacks that impose bigger threat on SDN architecture and also the features that could play important role in identification of these attacks are deduced.",
"title": ""
},
{
"docid": "neg:1840427_1",
"text": "We investigated teacher versus student seat selection in the context of group and individual seating arrangements. Disruptive behavior during group seating occurred at twice the rate when students chose their seats than when the teacher chose. During individual seating, disruptive behavior occurred more than three times as often when the students chose their seats. The results are discussed in relation to choice and the matching law.",
"title": ""
},
{
"docid": "neg:1840427_2",
"text": "In this study, we propose a novel, lightweight approach to real-time detection of vehicles using parts at intersections. Intersections feature oncoming, preceding, and cross traffic, which presents challenges for vision-based vehicle detection. Ubiquitous partial occlusions further complicate the vehicle detection task, and occur when vehicles enter and leave the camera's field of view. To confront these issues, we independently detect vehicle parts using strong classifiers trained with active learning. We match part responses using a learned matching classification. The learning process for part configurations leverages user input regarding full vehicle configurations. Part configurations are evaluated using Support Vector Machine classification. We present a comparison of detection results using geometric image features and appearance-based features. The full vehicle detection by parts has been evaluated on real-world data, runs in real time, and shows promise for future work in urban driver assistance.",
"title": ""
},
{
"docid": "neg:1840427_3",
"text": "Once generated, neurons are thought to permanently exit the cell cycle and become irreversibly differentiated. However, neither the precise point at which this post-mitotic state is attained nor the extent of its irreversibility is clearly defined. Here we report that newly born neurons from the upper layers of the mouse cortex, despite initiating axon and dendrite elongation, continue to drive gene expression from the neural progenitor tubulin α1 promoter (Tα1p). These observations suggest an ambiguous post-mitotic neuronal state. Whole transcriptome analysis of sorted upper cortical neurons further revealed that neurons continue to express genes related to cell cycle progression long after mitotic exit until at least post-natal day 3 (P3). These genes are however down-regulated thereafter, associated with a concomitant up-regulation of tumor suppressors at P5. Interestingly, newly born neurons located in the cortical plate (CP) at embryonic day 18-19 (E18-E19) and P3 challenged with calcium influx are found in S/G2/M phases of the cell cycle, and still able to undergo division at E18-E19 but not at P3. At P5 however, calcium influx becomes neurotoxic and leads instead to neuronal loss. Our data delineate an unexpected flexibility of cell cycle control in early born neurons, and describe how neurons transit to a post-mitotic state.",
"title": ""
},
{
"docid": "neg:1840427_4",
"text": "Radio Frequency Identification (RFID) systems aim to identify objects in open environments with neither physical nor visual contact. They consist of transponders inserted into objects, of readers, and usually of a database which contains information about the objects. The key point is that authorised readers must be able to identify tags without an adversary being able to trace them. Traceability is often underestimated by advocates of the technology and sometimes exaggerated by its detractors. Whatever the true picture, this problem is a reality when it blocks the deployment of this technology and some companies, faced with being boycotted, have already abandoned its use. Using cryptographic primitives to thwart the traceability issues is an approach which has been explored for several years. However, the research carried out up to now has not provided satisfactory results as no universal formalism has been defined. In this paper, we propose an adversarial model suitable for RFID environments. We define the notions of existential and universal untraceability and we model the access to the communication channels from a set of oracles. We show that our formalisation fits the problem being considered and allows a formal analysis of the protocols in terms of traceability. We use our model on several well-known RFID protocols and we show that most of them have weaknesses and are vulnerable to traceability.",
"title": ""
},
{
"docid": "neg:1840427_5",
"text": "This is an investigation of \" Online Creativity. \" I will present a new account of the cognitive and social mechanisms underlying complex thinking of creative scientists as they work on significant problems in contemporary science. I will lay out an innovative methodology that I have developed for investigating creative and complex thinking in a real-world context. Using this method, I have discovered that there are a number of strategies that are used in contemporary science that increase the likelihood of scientists making discoveries. The findings reported in this chapter provide new insights into complex scientific thinking and will dispel many of the myths surrounding the generation of new concepts and scientific discoveries. InVivo cognition: A new way of investigating cognition There is a large background in cognitive research on thinking, reasoning and problem solving processes that form the foundation for creative cognition (see Dunbar, in press, Holyoak 1996 for recent reviews). However, to a large extent, research on reasoning has demonstrated that subjects in psychology experiments make vast numbers of thinking and reasoning errors even in the most simple problems. How is creative thought even possible if people make so many reasoning errors? One problem with research on reasoning is that the concepts and stimuli that the subjects are asked to use are often arbitrary and involve no background knowledge (cf. Dunbar, 1995; Klahr & Dunbar, 1988). I have proposed that one way of determining what reasoning errors are specific and which are general is to investigate cognition in the cognitive laboratory and the real world (Dunbar, 1995). Psychologists should conduct both InVitro and InVivo research to understand thinking. InVitro research is the standard psychological experiment where subjects are brought into the laboratory and controlled experiments are conducted. As can be seen from the research reported in this volume, this approach yields many insights into the psychological mechanisms underlying complex thinking. The use of an InVivo methodology in which online thinking and reasoning are investigated in a real-world context yields fundamental insights into the basic cognitive mechanisms underlying complex cognition and creativity. The results of InVivo cognitive research can then be used as a basis for further InVitro work in which controlled experiments are conducted. In this chapter, I will outline some of the results of my ongoing InVivo research on creative scientific thinking and relate this research back to the more common InVitro research and show that the …",
"title": ""
},
{
"docid": "neg:1840427_6",
"text": "Accurate cardinality estimates are essential for a successful query optimization. This is not only true for relational DBMSs but also for RDF stores. An RDF database consists of a set of triples and, hence, can be seen as a relational database with a single table with three attributes. This makes RDF rather special in that queries typically contain many self joins. We show that relational DBMSs are not well-prepared to perform cardinality estimation in this context. Further, there are hardly any special cardinality estimation methods for RDF databases. To overcome this lack of appropriate cardinality estimation methods, we introduce characteristic sets together with new cardinality estimation methods based upon them. We then show experimentally that the new methods are-in the RDF context-highly superior to the estimation methods employed by commercial DBMSs and by the open-source RDF store RDF-3X.",
"title": ""
},
{
"docid": "neg:1840427_7",
"text": "Currently, the use of virtual reality (VR) is being widely applied in different fields, especially in computer science, engineering, and medicine. Concretely, the engineering applications based on VR cover approximately one half of the total number of VR resources (considering the research works published up to last year, 2016). In this paper, the capabilities of different computational software for designing VR applications in engineering education are discussed. As a result, a general flowchart is proposed as a guide for designing VR resources in any application. It is worth highlighting that, rather than this study being based on the applications used in the engineering field, the obtained results can be easily extrapolated to other knowledge areas without any loss of generality. This way, this paper can serve as a guide for creating a VR application.",
"title": ""
},
{
"docid": "neg:1840427_8",
"text": "We present a generic method for augmenting unsupervised query segmentation by incorporating Parts-of-Speech (POS) sequence information to detect meaningful but rare n-grams. Our initial experiments with an existing English POS tagger employing two different POS tagsets and an unsupervised POS induction technique specifically adapted for queries show that POS information can significantly improve query segmentation performance in all these cases.",
"title": ""
},
{
"docid": "neg:1840427_9",
"text": "Airbnb, an online marketplace for accommodations, has experienced a staggering growth accompanied by intense debates and scattered regulations around the world. Current discourses, however, are largely focused on opinions rather than empirical evidences. Here, we aim to bridge this gap by presenting the first large-scale measurement study on Airbnb, using a crawled data set containing 2.3 million listings, 1.3 million hosts, and 19.3 million reviews. We measure several key characteristics at the heart of the ongoing debate and the sharing economy. Among others, we find that Airbnb has reached a global yet heterogeneous coverage. The majority of its listings across many countries are entire homes, suggesting that Airbnb is actually more like a rental marketplace rather than a spare-room sharing platform. Analysis on star-ratings reveals that there is a bias toward positive ratings, amplified by a bias toward using positive words in reviews. The extent of such bias is greater than Yelp reviews, which were already shown to exhibit a positive bias. We investigate a key issue - commercial hosts who own multiple listings on Airbnb - repeatedly discussed in the current debate. We find that their existence is prevalent, they are early movers towards joining Airbnb, and their listings are disproportionately entire homes and located in the US. Our work advances the current understanding of how Airbnb is being used and may serve as an independent and empirical reference to inform the debate.",
"title": ""
},
{
"docid": "neg:1840427_10",
"text": "A MIMO antenna of size 40mm × 40mm × 1.6mm is proposed for WLAN applications. Antenna consists of four mushroom shaped Apollonian fractal planar monopoles having micro strip feed lines with edge feeding. It uses defective ground structure (DGS) to achieve good isolation. To achieve more isolation, the antenna elements are placed orthogonal to each other. Further, isolation can be increased using parasitic elements between the elements of antenna. Simulation is done to study reflection coefficient as well as coupling between input ports, directivity, peak gain, efficiency, impedance and VSWR. Results show that MIMO antenna has a bandwidth of 1.9GHZ ranging from 5 to 6.9 GHz, and mutual coupling of less than -20dB.",
"title": ""
},
{
"docid": "neg:1840427_11",
"text": "Electric solid propellants are an attractive option for space propulsion because they are ignited by applied electric power only. In this work, the behavior of pulsed microthruster devices utilizing such a material is investigated. These devices are similar in function and operation to the pulsed plasma thruster, which typically uses Teflon as propellant. A Faraday probe, Langmuir triple probe, residual gas analyzer, pendulum thrust stand and high speed camera are utilized as diagnostic devices. These thrusters are made in batches, of which a few devices were tested experimentally in vacuum environments. Results indicate a plume electron temperature of about 1.7 eV, with an electron density between 10 and 10 cm. According to thermal equilibrium and adiabatic expansion calculations, these relatively hot electrons are mixed with ~2000 K neutral and ion species, forming a non-equilibrium gas. From time-of-flight analysis, this gas mixture plume has an effective velocity of 1500-1650 m/s on centerline. The ablated mass of this plume is 215 μg on average, of which an estimated 0.3% is ionized species while 45±11% is ablated at negligible relative speed. This late-time ablation occurs on a time scale three times that of the 0.5 ms pulse discharge, and does not contribute to the measured 0.21 mN-s impulse per pulse. Similar values have previously been measured in pulsed plasma thrusters. These observations indicate the electric solid propellant material in this configuration behaves similar to Teflon in an electrothermal pulsed plasma",
"title": ""
},
{
"docid": "neg:1840427_12",
"text": "Virtual machine placement (VMP) and energy efficiency are significant topics in cloud computing research. In this paper, evolutionary computing is applied to VMP to minimize the number of active physical servers, so as to schedule underutilized servers to save energy. Inspired by the promising performance of the ant colony system (ACS) algorithm for combinatorial problems, an ACS-based approach is developed to achieve the VMP goal. Coupled with order exchange and migration (OEM) local search techniques, the resultant algorithm is termed an OEMACS. It effectively minimizes the number of active servers used for the assignment of virtual machines (VMs) from a global optimization perspective through a novel strategy for pheromone deposition which guides the artificial ants toward promising solutions that group candidate VMs together. The OEMACS is applied to a variety of VMP problems with differing VM sizes in cloud environments of homogenous and heterogeneous servers. The results show that the OEMACS generally outperforms conventional heuristic and other evolutionary-based approaches, especially on VMP with bottleneck resource characteristics, and offers significant savings of energy and more efficient use of different resources.",
"title": ""
},
{
"docid": "neg:1840427_13",
"text": "n engl j med 368;26 nejm.org june 27, 2013 2445 correct diagnoses as often as we think: the diagnostic failure rate is estimated to be 10 to 15%. The rate is highest among specialties in which patients are diagnostically undifferentiated, such as emergency medicine, family medicine, and internal medicine. Error in the visual specialties, such as radiology and pathology, is considerably lower, probably around 2%.1 Diagnostic error has multiple causes, but principal among them are cognitive errors. Usually, it’s not a lack of knowledge that leads to failure, but problems with the clinician’s thinking. Esoteric diagnoses are occasionally missed, but common illnesses are commonly misdiagnosed. For example, physicians know the pathophysiology of pulmonary embolus in excruciating detail, yet because its signs and symptoms are notoriously variable and overlap with those of numerous other diseases, this important diagnosis was missed a staggering 55% of the time in a series of fatal cases.2 Over the past 40 years, work by cognitive psychologists and others has pointed to the human mind’s vulnerability to cognitive biases, logical fallacies, false assumptions, and other reasoning failures. It seems that much of our everyday thinking is f lawed, and clinicians are not immune to the problem (see box). More than 100 biases affecting clinical decision making have been described, and many medical disciplines now acknowledge their pervasive influence on our thinking. Cognitive failures are best understood in the context of how our brains manage and process information. The two principal modes, automatic and controlled, are colloquially referred to as “intuitive” and “analytic”; psychologists know them as Type 1 and Type 2 processes. Various conceptualizations of the reasoning process have been proposed, but most can be incorporated into this dual-process system. This system is more than a model: it is accepted that the two processes involve different cortical mechanisms with associated neurophysiologic and neuroanatomical From Mindless to Mindful Practice — Cognitive Bias and Clinical Decision Making",
"title": ""
},
{
"docid": "neg:1840427_14",
"text": "This work describes the statistical machine translation (SMT) systems of RWTH Aachen University developed for the evaluation campaign International Workshop on Spoken Language Translation (IWSLT) 2013. We participated in the English→French, English↔German, Arabic→English, Chinese→English and Slovenian↔English MT tracks and the English→French and English→German SLT tracks. We apply phrase-based and hierarchical SMT decoders, which are augmented by state-of-the-art extensions. The novel techniques we experimentally evaluate include discriminative phrase training, a continuous space language model, a hierarchical reordering model, a word class language model, domain adaptation via data selection and system combination of standard and reverse order models. By application of these methods we can show considerable improvements over the respective baseline systems.",
"title": ""
},
{
"docid": "neg:1840427_15",
"text": "The paper concentrates on the fundamental coordination problem that requires a network of agents to achieve a specific but arbitrary formation shape. A new technique based on complex Laplacian is introduced to address the problems of which formation shapes specified by inter-agent relative positions can be formed and how they can be achieved with distributed control ensuring global stability. Concerning the first question, we show that all similar formations subject to only shape constraints are those that lie in the null space of a complex Laplacian satisfying certain rank condition and that a formation shape can be realized almost surely if and only if the graph modeling the inter-agent specification of the formation shape is 2-rooted. Concerning the second question, a distributed and linear control law is developed based on the complex Laplacian specifying the target formation shape, and provable existence conditions of stabilizing gains to assign the eigenvalues of the closed-loop system at desired locations are given. Moreover, we show how the formation shape control law is extended to achieve a rigid formation if a subset of knowledgable agents knowing the desired formation size scales the formation while the rest agents do not need to re-design and change their control laws.",
"title": ""
},
{
"docid": "neg:1840427_16",
"text": "s: Feature selection methods try to find a subset of the available features to improve the application of a learning algorithm. Many methods are based on searching a feature set that optimizes some evaluation function. On the other side, feature set estimators evaluate features individually. Relief is a well known and good feature set estimator. While being usually faster feature estimators have some disadvantages. Based on Relief ideas, we propose a feature set measure that can be used to evaluate the feature sets in a search process. We show how the proposed measure can help guiding the search process, as well as selecting the most appropriate feature set. The new measure is compared with a consistency measure, and the highly reputed wrapper approach.",
"title": ""
},
{
"docid": "neg:1840427_17",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "neg:1840427_18",
"text": "The problem of designing, coordinating, and managing complex systems has been central to the management and organizations literature. Recent writings have tended to offer modularity as, at least, a partial solution to this design problem. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the destabilizing effects of overly refined modularization and the modest levels of search and a premature fixation on inferior designs that can result from excessive levels of integration. The analysis highlights an asymmetry in this trade-off, with excessively refined modules leading to cycling behavior and a lack of performance improvement. We discuss the implications of these arguments for product and organization design.",
"title": ""
},
{
"docid": "neg:1840427_19",
"text": "We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla's implementation DeepSpeech end-to-end, and show it has a 100% success rate. The feasibility of this attack introduce a new domain to study adversarial examples.",
"title": ""
}
] |
1840428 | Map-Reduce for Machine Learning on Multicore | [
{
"docid": "pos:1840428_0",
"text": "This article is reprinted from the Internaional Electron Devices Meeting (1975). It discusses the complexity of integrated circuits, identifies their manufacture, production, and deployment, and addresses trends to their future deployment.",
"title": ""
}
] | [
{
"docid": "neg:1840428_0",
"text": "On the basis of a longitudinal field study of domestic communication, we report some essential constituents of the user experience of awareness of others who are distant in space or time, i.e. presence-in-absence. We discuss presence-in-absence in terms of its social (Contact) and informational (Content) facets, and the circumstances of the experience (Context). The field evaluation of a prototype, 'The Cube', designed to support presence-in-absence, threw up issues in the interrelationships between contact, content and context; issues that the designers of similar social artifacts will need to address.",
"title": ""
},
{
"docid": "neg:1840428_1",
"text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.",
"title": ""
},
{
"docid": "neg:1840428_2",
"text": "Unsupervised paraphrase acquisition has been an active research field in recent years, but its effective coverage and performance have rarely been evaluated. We propose a generic paraphrase-based approach for Relation Extraction (RE), aiming at a dual goal: obtaining an applicative evaluation scheme for paraphrase acquisition and obtaining a generic and largely unsupervised configuration for RE. We analyze the potential of our approach and evaluate an implemented prototype of it using an RE dataset. Our findings reveal a high potential for unsupervised paraphrase acquisition. We also identify the need for novel robust models for matching paraphrases in texts, which should address syntactic complexity and variability.",
"title": ""
},
{
"docid": "neg:1840428_3",
"text": "In order to understand the formation and subsequent evolution of galaxies one must first distinguish between the two main morphological classes of massive systems: spirals and early-type systems. This paper introduces a project, Galaxy Zoo, which provides visual morphological classifications for nearly one million galaxies, extracted from the Sloan Digital Sky Survey (SDSS). This achievement was made possible by inviting the general public to visually inspect and classify these galaxies via the internet. The project has obtained more than 4 × 107 individual classifications made by ∼105 participants. We discuss the motivation and strategy for this project, and detail how the classifications were performed and processed. We find that Galaxy Zoo results are consistent with those for subsets of SDSS galaxies classified by professional astronomers, thus demonstrating that our data provide a robust morphological catalogue. Obtaining morphologies by direct visual inspection avoids introducing biases associated with proxies for morphology such as colour, concentration or structural parameters. In addition, this catalogue can be used to directly compare SDSS morphologies with older data sets. The colour–magnitude diagrams for each morphological class are shown, and we illustrate how these distributions differ from those inferred using colour alone as a proxy for",
"title": ""
},
{
"docid": "neg:1840428_4",
"text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.",
"title": ""
},
{
"docid": "neg:1840428_5",
"text": "A labeled text corpus made up of Turkish papers' titles, abstracts and keywords is collected. The corpus includes 35 number of different disciplines, and 200 documents per subject. This study presents the text corpus' collection and content. The classification performance of Term Frequcney - Inverse Document Frequency (TF-IDF) and topic probabilities of Latent Dirichlet Allocation (LDA) features are compared for the text corpus. The text corpus is shared as open source so that it could be used for natural language processing applications with academic purposes.",
"title": ""
},
{
"docid": "neg:1840428_6",
"text": "Use of information technology management framework plays a major influence on organizational success. This article focuses on the field of Internet of Things (IoT) management. In this study, a number of risks in the field of IoT is investigated, then with review of a number of COBIT5 risk management schemes, some associated strategies, objectives and roles are provided. According to the in-depth studies of this area it is expected that using the best practices of COBIT5 can be very effective, while the use of this standard considerably improve some criteria such as performance, cost and time. Finally, the paper proposes a framework which reflects the best practices and achievements in the field of IoT risk management.",
"title": ""
},
{
"docid": "neg:1840428_7",
"text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when finegrained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet",
"title": ""
},
{
"docid": "neg:1840428_8",
"text": "In this report, our approach to tackling the task of ActivityNet 2018 Kinetics-600 challenge is described in detail. Though spatial-temporal modelling methods, which adopt either such end-to-end framework as I3D [1] or two-stage frameworks (i.e., CNN+RNN), have been proposed in existing state-of-the-arts for this task, video modelling is far from being well solved. In this challenge, we propose spatial-temporal network (StNet) for better joint spatial-temporal modelling and comprehensively video understanding. Besides, given that multimodal information is contained in video source, we manage to integrate both early-fusion and later-fusion strategy of multi-modal information via our proposed improved temporal Xception network (iTXN) for video understanding. Our StNet RGB single model achieves 78.99% top-1 precision in the Kinetics-600 validation set and that of our improved temporal Xception network which integrates RGB, flow and audio modalities is up to 82.35%. After model ensemble, we achieve top-1 precision as high as 85.0% on the validation set and rank No.1 among all submissions.",
"title": ""
},
{
"docid": "neg:1840428_9",
"text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the veriication problem is decidable; this is shown using results in algebraic and transcendental number theory.",
"title": ""
},
{
"docid": "neg:1840428_10",
"text": "We present a highly flexible and efficient software pipeline for programmable triangle voxelization. The pipeline, entirely written in CUDA, supports both fully conservative and thin voxelizations, multiple boolean, floating point, vector-typed render targets, user-defined vertex and fragment shaders, and a bucketing mode which can be used to generate 3D A-buffers containing the entire list of fragments belonging to each voxel. For maximum efficiency, voxelization is implemented as a sort-middle tile-based rasterizer, while the A-buffer mode, essentially performing 3D binning of triangles over uniform grids, uses a sort-last pipeline. Despite its major flexibility, the performance of our tile-based rasterizer is always competitive with and sometimes more than an order of magnitude superior to that of state-of-the-art binary voxelizers, whereas our bucketing system is up to 4 times faster than previous implementations. In both cases the results have been achieved through the use of careful load-balancing and high performance sorting primitives.",
"title": ""
},
{
"docid": "neg:1840428_11",
"text": "This paper addresses the task of assigning multiple labels of fine-grained named entity (NE) types to Wikipedia articles. To address the sparseness of the input feature space, which is salient particularly in fine-grained type classification, we propose to learn article vectors (i.e. entity embeddings) from hypertext structure of Wikipedia using a Skip-gram model and incorporate them into the input feature set. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled instances. The results of our experiments show that our idea gained statistically significant improvements in classification results.",
"title": ""
},
{
"docid": "neg:1840428_12",
"text": "Title of Thesis: SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM Degree candidate: Phillip Yip Degree and year: Master of Science, 2004 Thesis directed by: Assistant Professor Dimitrios Hristu-Varsakelis Department of Mechanical Engineering Modern control systems often consist of networks of components that must share a common communication channel. Not all components of the networked control system can communicate with one another simultaneously at any given time. The “attention” that each component receives is an important factor that affects the system’s overall performance. An effective controller should ensure that sensors and actuators receive sufficient attention. This thesis describes a “ball-on-plate” dynamical system that includes a digital controller, which communicates with a pair of language-driven actuators, and an overhead camera. A control algorithm was developed to restrict the ball to a small region on the plate using a quantized set of language-based commands. The size of this containment region was analytically determined as a function of the communication constraints and other control system parameters. The effectiveness of the proposed control law was evaluated in experiments and mathematical simulations. SYMBOL-BASED CONTROL OF A BALL-ON-PLATE MECHANICAL SYSTEM by Phillip Yip Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park in partial fulfillment of the requirements for the degree of Master of Science 2004 Advisory Commmittee: Assistant Professor Dimitrios Hristu-Varsakelis, Chair/Advisor Professor Balakumar Balachandran Professor Amr Baz c ©Copyright by Phillip T. Yip 2004 DEDICATION: To my family",
"title": ""
},
{
"docid": "neg:1840428_13",
"text": "This paper explores a monetary policy model with habit formation for consumers, in which consumers’ utility depends in part on current consumption relative to past consumption. The empirical tests developed in the paper show that one can reject the hypothesis of no habit formation with tremendous confidence, largely because the habit formation model captures the gradual hump-shaped response of real spending to various shocks. The paper then embeds the habit consumption specification in a monetary policy model and finds that the responses of both spending and inflation to monetary policy actions are significantly improved by this modification. (JEL D12, E52, E43) Forthcoming, American Economic Review, June 2000. With the resurgence of interest in the effects of monetary policy on the macroeconomy, led by the work of the Christina D. and David H. Romer (1989), Ben S. Bernanke and Alan S. Blinder (1992), Lawrence J. Christiano, Martin S. Eichenbaum, and Charles L. Evans (1996), and others, the need for a structural model that could plausibly be used for monetary policy analysis has become evident. Of course, many extant models have been used for monetary policy analysis, but many of these are perceived as having critical shortcomings. First, some models do not incorporate explicit expectations behavior, so that changes in policy (or private) behavior could cause shifts in reduced-form parameters (i.e., the critique of Robert E. Lucas 1976). Others incorporate expectations, but derive key relationships from ad hoc behavioral assumptions, rather than from explicit optimizing problems for consumers and firms (Fuhrer and George R. Moore 1995b is an example). Explicit expectations and optimizing behavior are both desirable, other things equal, for a model of monetary analysis. First, analyzing potential improvements to monetary policy relative to historical policies requires a model that is stable across alternative policy regimes. This underlines the importance of explicit expectations formation. Second, the “optimal” in optimal monetary policy must ultimately refer to social welfare. Many have approximated social welfare with weighted averages of output and inflation variances, but one cannot know how good these approximations are without more explicit modeling of welfare. This implies that the model be closely tied to the underlying objectives of consumers and firms, hence the emphasis on optimization-based models. A critical test for whether a model reflects underlying objectives is its ability to accurately reflect the dominant dynamic interactions in the data. A number of recent papers (see, for example, Robert G. King and Alexander L. Wolman (1996), Bennett T. McCallum and Edward Nelson (1999a, 1999b); Julio R. Rotemberg and Michael Woodford (1997)) have developed models that incorporate explicit expectations, optimizing behavior, and frictions that allow monetary policy to have real effects. This paper continues in that line of research by documenting the empirical importance of a key feature of aggregate data: the “hump-shaped,” gradual response of spending and inflation to shocks. It then develops a monetary policy model that can capture this feature, as well as all of the features (e.g. the real effects of monetary policy, the persistence of inflation and output) embodied in earlier models. The key to the model’s success on the spending side is the inclusion of habit formation in the consumer’s utility function. This modification",
"title": ""
},
{
"docid": "neg:1840428_14",
"text": "Generative adversarial networks (GANs) are a class of unsupervised machine learning algorithms that can produce realistic images from randomly-sampled vectors in a multi-dimensional space. Until recently, it was not possible to generate realistic high-resolution images using GANs, which has limited their applicability to medical images that contain biomarkers only detectable at native resolution. Progressive growing of GANs is an approach wherein an image generator is trained to initially synthesize low resolution synthetic images (8x8 pixels), which are then fed to a discriminator that distinguishes these synthetic images from real downsampled images. Additional convolutional layers are then iteratively introduced to produce images at twice the previous resolution until the desired resolution is reached. In this work, we demonstrate that this approach can produce realistic medical images in two different domains; fundus photographs exhibiting vascular pathology associated with retinopathy of prematurity (ROP), and multi-modal magnetic resonance images of glioma. We also show that fine-grained details associated with pathology, such as retinal vessels or tumor heterogeneity, can be preserved and enhanced by including segmentation maps as additional channels. We envisage several applications of the approach, including image augmentation and unsupervised classification of pathology.",
"title": ""
},
{
"docid": "neg:1840428_15",
"text": "This paper compares the attributes of 36 slot, 33 slot and 12 slot brushless interior permanent magnet motor designs, each with an identical 10 pole interior magnet rotor. The aim of the paper is to quantify the trade-offs between alternative distributed and concentrated winding configurations taking into account aspects such as thermal performance, field weakening behaviour, acoustic noise, and efficiency. It is found that the concentrated 12 slot design gives the highest theoretical performance however significant rotor losses are found during testing and a large amount of acoustic noise and vibration is generated. The 33 slot design is found to have marginally better performance than the 36 slot but it also generates some unbalanced magnetic pull on the rotor which may lead to mechanical issues at higher speeds.",
"title": ""
},
{
"docid": "neg:1840428_16",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "neg:1840428_17",
"text": "Management of bulk sensor data is one of the challenging problems in the development of Internet of Things (IoT) applications. High volume of sensor data induces for optimal implementation of appropriate sensor data compression technique to deal with the problem of energy-efficient transmission, storage space optimization for tiny sensor devices, and cost-effective sensor analytics. The compression performance to realize significant gain in processing high volume sensor data cannot be attained by conventional lossy compression methods, which are less likely to exploit the intrinsic unique contextual characteristics of sensor data. In this paper, we propose SensCompr, a dynamic lossy compression method specific for sensor datasets and it is easily realizable with standard compression methods. Senscompr leverages robust statistical and information theoretic techniques and does not require specific physical modeling. It is an information-centric approach that exhaustively analyzes the inherent properties of sensor data for extracting the embedded useful information content and accordingly adapts the parameters of compression scheme to maximize compression gain while optimizing information loss. Senscompr is successfully applied to compress large sets of heterogeneous real sensor datasets like ECG, EEG, smart meter, accelerometer. To the best of our knowledge, for the first time 'sensor information content'-centric dynamic compression technique is proposed and implemented particularly for IoT-applications and this method is independent to sensor data types.",
"title": ""
},
{
"docid": "neg:1840428_18",
"text": "In traditional cloud storage systems, attribute-based encryption (ABE) is regarded as an important technology for solving the problem of data privacy and fine-grained access control. However, in all ABE schemes, the private key generator has the ability to decrypt all data stored in the cloud server, which may bring serious problems such as key abuse and privacy data leakage. Meanwhile, the traditional cloud storage model runs in a centralized storage manner, so single point of failure may leads to the collapse of system. With the development of blockchain technology, decentralized storage mode has entered the public view. The decentralized storage approach can solve the problem of single point of failure in traditional cloud storage systems and enjoy a number of advantages over centralized storage, such as low price and high throughput. In this paper, we study the data storage and sharing scheme for decentralized storage systems and propose a framework that combines the decentralized storage system interplanetary file system, the Ethereum blockchain, and ABE technology. In this framework, the data owner has the ability to distribute secret key for data users and encrypt shared data by specifying access policy, and the scheme achieves fine-grained access control over data. At the same time, based on smart contract on the Ethereum blockchain, the keyword search function on the cipher text of the decentralized storage systems is implemented, which solves the problem that the cloud server may not return all of the results searched or return wrong results in the traditional cloud storage systems. Finally, we simulated the scheme in the Linux system and the Ethereum official test network Rinkeby, and the experimental results show that our scheme is feasible.",
"title": ""
},
{
"docid": "neg:1840428_19",
"text": "In this paper, we introduce an approach for recognizing the absence of opposing arguments in persuasive essays. We model this task as a binary document classification and show that adversative transitions in combination with unigrams and syntactic production rules significantly outperform a challenging heuristic baseline. Our approach yields an accuracy of 75.6% and 84% of human performance in a persuasive essay corpus with various topics.",
"title": ""
}
] |
1840429 | A Warning System for Obstacle Detection at Vehicle Lateral Blind Spot Area | [
{
"docid": "pos:1840429_0",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
}
] | [
{
"docid": "neg:1840429_0",
"text": "Because wireless sensor networks (WSNs) are becoming increasingly integrated into daily life, solving the energy efficiency problem of such networks is an urgent problem. Many energy-efficient algorithms have been proposed to reduce energy consumption in traditional WSNs. The emergence of software-defined networks (SDNs) enables the transformation of WSNs. Some SDN-based WSNs architectures have been proposed and energy-efficient algorithms in SDN-based WSNs architectures have been studied. In this paper, we integrate an SDN into WSNs and an improved software-defined WSNs (SD-WSNs) architecture is presented. Based on the improved SD-WSNs architecture, we propose an energy-efficient algorithm. This energy-efficient algorithm is designed to match the SD-WSNs architecture, and is based on the residual energy and the transmission power, and the game theory is introduced to extend the network lifetime. Based on the SD-WSNs architecture and the energy-efficient algorithm, we provide a detailed introduction to the operating mechanism of the algorithm in the SD-WSNs. The simulation results show that our proposed algorithm performs better in terms of balancing energy consumption and extending the network lifetime compared with the typical energy-efficient algorithms in traditional WSNs.",
"title": ""
},
{
"docid": "neg:1840429_1",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "neg:1840429_2",
"text": "This paper presents a method to extract tone relevant features based on pitch flux from continuous speech signal. The autocorrelations of two adjacent frames are calculated and the covariance between them is estimated to extract multi-dimensional pitch flux features. These features, together with MFCCs, are modeled in a 2-stream GMM models, and are tested in a 3-dialect identification task for Chinese. The pitch flux features have shown to be very effective in identifying tonal languages with short speech segments. For the test speech segments of 3 seconds, 2-stream model achieves more than 30% error reduction over MFCC-based model",
"title": ""
},
{
"docid": "neg:1840429_3",
"text": "The current state of the art in playing many important perfect information games, including Chess and Go, combines planning and deep reinforcement learning with self-play. We extend this approach to imperfect information games and present ExIt-OOS, a novel approach to playing imperfect information games within the Expert Iteration framework and inspired by AlphaZero. We use Online Outcome Sampling, an online search algorithm for imperfect information games in place of MCTS. While training online, our neural strategy is used to improve the accuracy of playouts in OOS, allowing a learning and planning feedback loop for imperfect information games.",
"title": ""
},
{
"docid": "neg:1840429_4",
"text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.",
"title": ""
},
{
"docid": "neg:1840429_5",
"text": "Single-cell RNA-Seq (scRNA-Seq) has attracted much attention recently because it allows unprecedented resolution into cellular activity; the technology, therefore, has been widely applied in studying cell heterogeneity such as the heterogeneity among embryonic cells at varied developmental stages or cells of different cancer types or subtypes. A pertinent question in such analyses is to identify cell subpopulations as well as their associated genetic drivers. Consequently, a multitude of approaches have been developed for clustering or biclustering analysis of scRNA-Seq data. In this article, we present a fast and simple iterative biclustering approach called \"BiSNN-Walk\" based on the existing SNN-Cliq algorithm. One of BiSNN-Walk's differentiating features is that it returns a ranked list of clusters, which may serve as an indicator of a cluster's reliability. Another important feature is that BiSNN-Walk ranks genes in a gene cluster according to their level of affiliation to the associated cell cluster, making the result more biologically interpretable. We also introduce an entropy-based measure for choosing a highly clusterable similarity matrix as our starting point among a wide selection to facilitate the efficient operation of our algorithm. We applied BiSNN-Walk to three large scRNA-Seq studies, where we demonstrated that BiSNN-Walk was able to retain and sometimes improve the cell clustering ability of SNN-Cliq. We were able to obtain biologically sensible gene clusters in terms of GO term enrichment. In addition, we saw that there was significant overlap in top characteristic genes for clusters corresponding to similar cell states, further demonstrating the fidelity of our gene clusters.",
"title": ""
},
{
"docid": "neg:1840429_6",
"text": "Traditional topic models do not account for semantic regularities in language. Recent distributional representations of words exhibit semantic consistency over directional metrics such as cosine similarity. However, neither categorical nor Gaussian observational distributions used in existing topic models are appropriate to leverage such correlations. In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere. Such a representation is well-suited for directional data. We use a Hierarchical Dirichlet Process for our base topic model and propose an efficient inference algorithm based on Stochastic Variational Inference. This model enables us to naturally exploit the semantic structures of word embeddings while flexibly discovering the number of topics. Experiments demonstrate that our method outperforms competitive approaches in terms of topic coherence on two different text corpora while offering efficient inference.",
"title": ""
},
{
"docid": "neg:1840429_7",
"text": "Accumulating evidence suggests that, independent of physical activity levels, sedentary behaviours are associated with increased risk of cardio-metabolic disease, all-cause mortality, and a variety of physiological and psychological problems. Therefore, the purpose of this systematic review is to determine the relationship between sedentary behaviour and health indicators in school-aged children and youth aged 5-17 years. Online databases (MEDLINE, EMBASE and PsycINFO), personal libraries and government documents were searched for relevant studies examining time spent engaging in sedentary behaviours and six specific health indicators (body composition, fitness, metabolic syndrome and cardiovascular disease, self-esteem, pro-social behaviour and academic achievement). 232 studies including 983,840 participants met inclusion criteria and were included in the review. Television (TV) watching was the most common measure of sedentary behaviour and body composition was the most common outcome measure. Qualitative analysis of all studies revealed a dose-response relation between increased sedentary behaviour and unfavourable health outcomes. Watching TV for more than 2 hours per day was associated with unfavourable body composition, decreased fitness, lowered scores for self-esteem and pro-social behaviour and decreased academic achievement. Meta-analysis was completed for randomized controlled studies that aimed to reduce sedentary time and reported change in body mass index (BMI) as their primary outcome. In this regard, a meta-analysis revealed an overall significant effect of -0.81 (95% CI of -1.44 to -0.17, p = 0.01) indicating an overall decrease in mean BMI associated with the interventions. There is a large body of evidence from all study designs which suggests that decreasing any type of sedentary time is associated with lower health risk in youth aged 5-17 years. In particular, the evidence suggests that daily TV viewing in excess of 2 hours is associated with reduced physical and psychosocial health, and that lowering sedentary time leads to reductions in BMI.",
"title": ""
},
{
"docid": "neg:1840429_8",
"text": "Although efforts have been directed toward the advancement of women in science, technology, engineering, and mathematics (STEM) positions, little research has directly examined women's perspectives and bottom-up strategies for advancing in male-stereotyped disciplines. The present study utilized Photovoice, a Participatory Action Research method, to identify themes that underlie women's experiences in traditionally male-dominated fields. Photovoice enables participants to convey unique aspects of their experiences via photographs and their in-depth knowledge of a community through personal narrative. Forty-six STEM women graduate students and postdoctoral fellows completed a Photovoice activity in small groups. They presented photographs that described their experiences pursuing leadership positions in STEM fields. Three types of narratives were discovered and classified: career strategies, barriers to achievement, and buffering strategies or methods for managing barriers. Participants described three common types of career strategies and motivational factors, including professional development, collaboration, and social impact. Moreover, the lack of rewards for these workplace activities was seen as limiting professional effectiveness. In terms of barriers to achievement, women indicated they were not recognized as authority figures and often worked to build legitimacy by fostering positive relationships. Women were vigilant to other people's perspectives, which was costly in terms of time and energy. To manage role expectations, including those related to gender, participants engaged in numerous role transitions throughout their day to accommodate workplace demands. To buffer barriers to achievement, participants found resiliency in feelings of accomplishment and recognition. Social support, particularly from mentors, helped participants cope with negative experiences and to envision their future within the field. Work-life balance also helped participants find meaning in their work and have a sense of control over their lives. Overall, common workplace challenges included a lack of social capital and limited degrees of freedom. Implications for organizational policy and future research are discussed.",
"title": ""
},
{
"docid": "neg:1840429_9",
"text": "It has recently been shown in a brain-computer interface experiment that motor cortical neurons change their tuning properties selectively to compensate for errors induced by displaced decoding parameters. In particular, it was shown that the three-dimensional tuning curves of neurons whose decoding parameters were reassigned changed more than those of neurons whose decoding parameters had not been reassigned. In this article, we propose a simple learning rule that can reproduce this effect. Our learning rule uses Hebbian weight updates driven by a global reward signal and neuronal noise. In contrast to most previously proposed learning rules, this approach does not require extrinsic information to separate noise from signal. The learning rule is able to optimize the performance of a model system within biologically realistic periods of time under high noise levels. Furthermore, when the model parameters are matched to data recorded during the brain-computer interface learning experiments described above, the model produces learning effects strikingly similar to those found in the experiments.",
"title": ""
},
{
"docid": "neg:1840429_10",
"text": "We propose a convolutional neural network architecture with k-max pooling layer for semantic modeling of music. The aim of a music model is to analyze and represent the semantic content of music for purposes of classification, discovery, or clustering. The k-max pooling layer is used in the network to make it possible to pool the k most active features, capturing the semantic-rich and time-varying information about music. Our network takes an input music as a sequence of audio words, where each audio word is associated with a distributed feature vector that can be fine-tuned by backpropagating errors during the training. The architecture allows us to take advantage of the better trained audio word embeddings and the deep structures to produce more robust music representations. Experiment results with two different music collections show that our neural networks achieved the best accuracy in music genre classification comparing with three state-of-art systems.",
"title": ""
},
{
"docid": "neg:1840429_11",
"text": "Qualitative research methodology has become an established part of the medical education research field. A very popular data-collection technique used in qualitative research is the \"focus group\". Focus groups in this Guide are defined as \"… group discussions organized to explore a specific set of issues … The group is focused in the sense that it involves some kind of collective activity … crucially, focus groups are distinguished from the broader category of group interview by the explicit use of the group interaction as research data\" (Kitzinger 1994, p. 103). This Guide has been designed to provide people who are interested in using focus groups with the information and tools to organize, conduct, analyze and publish sound focus group research within a broader understanding of the background and theoretical grounding of the focus group method. The Guide is organized as follows: Firstly, to describe the evolution of the focus group in the social sciences research domain. Secondly, to describe the paradigmatic fit of focus groups within qualitative research approaches in the field of medical education. After defining, the nature of focus groups and when, and when not, to use them, the Guide takes on a more practical approach, taking the reader through the various steps that need to be taken in conducting effective focus group research. Finally, the Guide finishes with practical hints towards writing up a focus group study for publication.",
"title": ""
},
{
"docid": "neg:1840429_12",
"text": "Brain endothelial cells are unique among endothelial cells in that they express apical junctional complexes, including tight junctions, which quite resemble epithelial tight junctions both structurally and functionally. They form the blood-brain-barrier (BBB) which strictly controls the exchanges between the blood and the brain compartments by limiting passive diffusion of blood-borne solutes while actively transporting nutrients to the brain. Accumulating experimental and clinical evidence indicate that BBB dysfunctions are associated with a number of serious CNS diseases with important social impacts, such as multiple sclerosis, stroke, brain tumors, epilepsy or Alzheimer's disease. This review will focus on the implication of brain endothelial tight junctions in BBB architecture and physiology, will discuss the consequences of BBB dysfunction in these CNS diseases and will present some therapeutic strategies for drug delivery to the brain across the BBB.",
"title": ""
},
{
"docid": "neg:1840429_13",
"text": "INTRODUCTION\nOpioid overdose fatality has increased threefold since 1999. As a result, prescription drug overdose surpassed motor vehicle collision as the leading cause of unintentional injury-related death in the USA. Naloxone , an opioid antagonist that has been available for decades, can safely reverse opioid overdose if used promptly and correctly. However, clinicians often overestimate the dose of naloxone needed to achieve the desired clinical outcome, precipitating acute opioid withdrawal syndrome (OWS).\n\n\nAREAS COVERED\nThis article provides a comprehensive review of naloxone's pharmacologic properties and its clinical application to promote the safe use of naloxone in acute management of opioid intoxication and to mitigate the risk of precipitated OWS. Available clinical data on opioid-receptor kinetics that influence the reversal of opioid agonism by naloxone are discussed. Additionally, the legal and social barriers to take home naloxone programs are addressed.\n\n\nEXPERT OPINION\nNaloxone is an intrinsically safe drug, and may be administered in large doses with minimal clinical effect in non-opioid-dependent patients. However, when administered to opioid-dependent patients, naloxone can result in acute opioid withdrawal. Therefore, it is prudent to use low-dose naloxone (0.04 mg) with appropriate titration to reverse ventilatory depression in this population.",
"title": ""
},
{
"docid": "neg:1840429_14",
"text": "Self-mutilating behaviors could be minor and benign, but more severe cases are usually associated with psychiatric disorders or with acquired nervous system lesions and could be life-threatening. The patient was a 66-year-old man who had been mutilating his fingers for 6 years. This behavior started as serious nail biting and continued as severe finger mutilation (by biting), resulting in loss of the terminal phalanges of all fingers in both hands. On admission, he complained only about insomnia. The electromyography showed severe peripheral nerve damage in both hands and feet caused by severe diabetic neuropathy. Cognitive decline was not established (Mini Mental State Examination score, 28), although the computed tomographic scan revealed serious brain atrophy. He was given a diagnosis of impulse control disorder not otherwise specified. His impulsive biting improved markedly when low doses of haloperidol (1.5 mg/day) were added to fluoxetine (80 mg/day). In our patient's case, self-mutilating behavior was associated with severe diabetic neuropathy, impulsivity, and social isolation. The administration of a combination of an antipsychotic and an antidepressant proved to be beneficial.",
"title": ""
},
{
"docid": "neg:1840429_15",
"text": "In this paper, we tackle the problem of associating combinations of colors to abstract categories (e.g. capricious, classic, cool, delicate, etc.). It is evident that such concepts would be difficult to distinguish using single colors, therefore we consider combinations of colors or color palettes. We leverage two novel databases for color palettes and we learn categorization models using low and high level descriptors. Preliminary results show that Fisher representation based on GMMs is the most rewarding strategy in terms of classification performance over a baseline model. We also suggest a process for cleaning weakly annotated data, whilst preserving the visual coherence of categories. Finally, we demonstrate how learning abstract categories on color palettes can be used in the application of color transfer, personalization and image re-ranking.",
"title": ""
},
{
"docid": "neg:1840429_16",
"text": "A continuing question in neural net research is the size of network needed to solve a particular problem. If training is started with too small a network for the problem no learning can occur. The researcher must then go through a slow process of deciding that no learning is taking place, increasing the size of the network and training again. If a network that is larger than required is used, then processing is slowed, particularly on a conventional von Neumann computer. An approach to this problem is discussed that is based on learning with a net which is larger than the minimum size network required to solve the problem and then pruning the solution network. The result is a small, efficient network that performs as well or better than the original which does not give a complete answer to the question, since the size of the initial network is still largely based on guesswork but it gives a very useful partial answer and sheds some light on the workings of a neural network in the process.<<ETX>>",
"title": ""
},
{
"docid": "neg:1840429_17",
"text": "We propose a new method for learning from a single demonstration to solve hard exploration tasks like the Atari game Montezuma’s Revenge. Instead of imitating human demonstrations, as proposed in other recent works, our approach is to maximize rewards directly. Our agent is trained using off-the-shelf reinforcement learning, but starts every episode by resetting to a state from a demonstration. By starting from such demonstration states, the agent requires much less exploration to learn a game compared to when it starts from the beginning of the game at every episode. We analyze reinforcement learning for tasks with sparse rewards in a simple toy environment, where we show that the run-time of standard RL methods scales exponentially in the number of states between rewards. Our method reduces this to quadratic scaling, opening up many tasks that were previously infeasible. We then apply our method to Montezuma’s Revenge, for which we present a trained agent achieving a high-score of 74,500, better than any previously published result.",
"title": ""
},
{
"docid": "neg:1840429_18",
"text": "Home security should be a top concern for everyone who owns or rents a home. Moreover, safe and secure residential space is the necessity of every individual as most of the family members are working. The home is left unattended for most of the day-time and home invasion crimes are at its peak as constantly monitoring of the home is difficult. Another reason for the need of home safety is specifically when the elderly person is alone or the kids are with baby-sitter and servant. Home security system i.e. HomeOS is thus applicable and desirable for resident’s safety and convenience. This will be achieved by turning your home into a smart home by intelligent remote monitoring. Smart home comes into picture for the purpose of controlling and monitoring the home. It will give you peace of mind, as you can have a close watch and stay connected anytime, anywhere. But, is common man really concerned about home security? An investigative study was done by conducting a survey to get the inputs from different people from diverse backgrounds. The main motivation behind this survey was to make people aware of advanced HomeOS and analyze their need for security. This paper also studied the necessity of HomeOS investigative study in current situation where the home burglaries are rising at an exponential rate. In order to arrive at findings and conclusions, data were analyzed. The graphical method was employed to identify the relative significance of home security. From this analysis, we can infer that the cases of having kids and aged person at home or location of home contribute significantly to the need of advanced home security system. At the end, the proposed system model with its flow and the challenges faced while implementing home security systems are also discussed.",
"title": ""
},
{
"docid": "neg:1840429_19",
"text": "Water is a naturally circulating resource that is constantly recharged. Therefore, even though the stocks of water in natural and artificial reservoirs are helpful to increase the available water resources for human society, the flow of water should be the main focus in water resources assessments. The climate system puts an upper limit on the circulation rate of available renewable freshwater resources (RFWR). Although current global withdrawals are well below the upper limit, more than two billion people live in highly water-stressed areas because of the uneven distribution of RFWR in time and space. Climate change is expected to accelerate water cycles and thereby increase the available RFWR. This would slow down the increase of people living under water stress; however, changes in seasonal patterns and increasing probability of extreme events may offset this effect. Reducing current vulnerability will be the first step to prepare for such anticipated changes.",
"title": ""
}
] |
1840430 | Bioinformatics - an introduction for computer scientists | [
{
"docid": "pos:1840430_0",
"text": "MOTIVATION\nIn a previous paper, we presented a polynomial time dynamic programming algorithm for predicting optimal RNA secondary structure including pseudoknots. However, a formal grammatical representation for RNA secondary structure with pseudoknots was still lacking.\n\n\nRESULTS\nHere we show a one-to-one correspondence between that algorithm and a formal transformational grammar. This grammar class encompasses the context-free grammars and goes beyond to generate pseudoknotted structures. The pseudoknot grammar avoids the use of general context-sensitive rules by introducing a small number of auxiliary symbols used to reorder the strings generated by an otherwise context-free grammar. This formal representation of the residue correlations in RNA structure is important because it means we can build full probabilistic models of RNA secondary structure, including pseudoknots, and use them to optimally parse sequences in polynomial time.",
"title": ""
}
] | [
{
"docid": "neg:1840430_0",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "neg:1840430_1",
"text": "With single computer's computation power not sufficing, need for sharing resources to manipulate and manage data through clouds is increasing rapidly. Hence, it is favorable to delegate computations or store data with a third party, the cloud provider. However, delegating data to third party poses the risks of data disclosure during computation. The problem can be addressed by carrying out computation without decrypting the encrypted data. The results are also obtained encrypted and can be decrypted at the user side. This requires modifying functions in such a way that they are still executable while privacy is ensured or to search an encrypted database. Homomorphic encryption provides security to cloud consumer data while preserving system usability. We propose a symmetric key homomorphic encryption scheme based on matrix operations with primitives that make it easily adaptable for different needs in various cloud computing scenarios.",
"title": ""
},
{
"docid": "neg:1840430_2",
"text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.",
"title": ""
},
{
"docid": "neg:1840430_3",
"text": "We present an algorithm that enables casual 3D photography. Given a set of input photos captured with a hand-held cell phone or DSLR camera, our algorithm reconstructs a 3D photo, a central panoramic, textured, normal mapped, multi-layered geometric mesh representation. 3D photos can be stored compactly and are optimized for being rendered from viewpoints that are near the capture viewpoints. They can be rendered using a standard rasterization pipeline to produce perspective views with motion parallax. When viewed in VR, 3D photos provide geometrically consistent views for both eyes. Our geometric representation also allows interacting with the scene using 3D geometry-aware effects, such as adding new objects to the scene and artistic lighting effects.\n Our 3D photo reconstruction algorithm starts with a standard structure from motion and multi-view stereo reconstruction of the scene. The dense stereo reconstruction is made robust to the imperfect capture conditions using a novel near envelope cost volume prior that discards erroneous near depth hypotheses. We propose a novel parallax-tolerant stitching algorithm that warps the depth maps into the central panorama and stitches two color-and-depth panoramas for the front and back scene surfaces. The two panoramas are fused into a single non-redundant, well-connected geometric mesh. We provide videos demonstrating users interactively viewing and manipulating our 3D photos.",
"title": ""
},
{
"docid": "neg:1840430_4",
"text": "In many applications, the training data, from which one need s to learn a classifier, is corrupted with label noise. Many st andard algorithms such as SVM perform poorly in presence of label no ise. In this paper we investigate the robustness of risk mini ization to label noise. We prove a sufficient condition on a loss funct io for the risk minimization under that loss to be tolerant t o uniform label noise. We show that the 0 − 1 loss, sigmoid loss, ramp loss and probit loss satisfy this c ondition though none of the standard convex loss functions satisfy it. We also prove that, by choo sing a sufficiently large value of a parameter in the loss func tio , the sigmoid loss, ramp loss and probit loss can be made tolerant t o non-uniform label noise also if we can assume the classes to be separable under noise-free data distribution. Through ext ensive empirical studies, we show that risk minimization un der the 0− 1 loss, the sigmoid loss and the ramp loss has much better robus tness to label noise when compared to the SVM algorithm.",
"title": ""
},
{
"docid": "neg:1840430_5",
"text": "Wegener’s granulomatosis (WG) is an autoimmune disease, which particularly affects the upper respiratory pathways, lungs and kidney. Oral mucosal involvement presents in around 5%--10% of cases and may be the first disease symptom. Predominant manifestation is granulomatous gingivitis erythematous papules; mucosal necrosis and non-specific ulcers with or without impact on adjacent structures. Clinically speaking, the most characteristic lesion presents as a gingival hyperplasia of the gum, with hyperaemia and petechias on its surface which bleed when touched. Due to its appearance, it has been called ‘‘Strawberry gingiva’’. The following is a clinical case in which the granulomatous strawberry gingivitis was the first sign of WG.",
"title": ""
},
{
"docid": "neg:1840430_6",
"text": "An economic evaluation of a hybrid wind/photovoltaic/fuel cell generation system for a typical home in the Pacific Northwest is performed. In this configuration the combination of a fuel cell stack, an electrolyzer, and a hydrogen storage tank is used for the energy storage system. This system is compared to a traditional hybrid energy system with battery storage. A computer program has been developed to size system components in order to match the load of the site in the most cost effective way. A cost of electricity and an overall system cost are also calculated for each configuration. The study was performed using a graphical user interface programmed in MATLAB.",
"title": ""
},
{
"docid": "neg:1840430_7",
"text": "AMS subject classifications: primary 62G10 secondary 62H20 Keywords: dCor dCov Multivariate independence Distance covariance Distance correlation High dimension a b s t r a c t Distance correlation is extended to the problem of testing the independence of random vectors in high dimension. Distance correlation characterizes independence and determines a test of multivariate independence for random vectors in arbitrary dimension. In this work, a modified distance correlation statistic is proposed, such that under independence the distribution of a transformation of the statistic converges to Student t, as dimension tends to infinity. Thus we obtain a distance correlation t-test for independence of random vectors in arbitrarily high dimension, applicable under standard conditions on the coordinates that ensure the validity of certain limit theorems. This new test is based on an unbiased es-timator of distance covariance, and the resulting t-test is unbiased for every sample size greater than three and all significance levels. The transformed statistic is approximately normal under independence for sample size greater than nine, providing an informative sample coefficient that is easily interpretable for high dimensional data. 1. Introduction Many applications in genomics, medicine, engineering, etc. require analysis of high dimensional data. Time series data can also be viewed as high dimensional data. Objects can be represented by their characteristics or features as vectors p. In this work, we consider the extension of distance correlation to the problem of testing independence of random vectors in arbitrarily high, not necessarily equal dimensions, so the dimension p of the feature space of a random vector is typically large. measure all types of dependence between random vectors in arbitrary, not necessarily equal dimensions. (See Section 2 for definitions.) Distance correlation takes values in [0, 1] and is equal to zero if and only if independence holds. It is more general than the classical Pearson product moment correlation, providing a scalar measure of multivariate independence that characterizes independence of random vectors. The distance covariance test of independence is consistent against all dependent alternatives with finite second moments. In practice, however, researchers are often interested in interpreting the numerical value of distance correlation, without a formal test. For example, given an array of distance correlation statistics, what can one learn about the strength of dependence relations from the dCor statistics without a formal test? This is in fact, a difficult question, but a solution is finally available for a large class of problems. The …",
"title": ""
},
{
"docid": "neg:1840430_8",
"text": "This paper presents a 64-times interleaved 2.6 GS/s 10b successive-approximation-register (SAR) ADC in 65 nm CMOS. The ADC combines interleaving hierarchy with an open-loop buffer array operated in feedforward-sampling and feedback-SAR mode. The sampling front-end consists of four interleaved T/Hs at 650 MS/s that are optimized for timing accuracy and sampling linearity, while the back-end consists of four ADC arrays, each consisting of 16 10b current-mode non-binary SAR ADCs. The interleaving hierarchy allows for many ADCs to be used per T/H and eliminates distortion stemming from open loop buffers interfacing between the front-end and back-end. Startup on-chip calibration deals with offset and gain mismatches as well as DAC linearity. Measurements show that the prototype ADC achieves an SNDR of 48.5 dB and a THD of less than 58 dB at Nyquist with an input signal of 1.4 . An estimated sampling clock skew spread of 400 fs is achieved by careful design and layout. Up to 4 GHz an SNR of more than 49 dB has been measured, enabled by the less than 110 fs rms clock jitter. The ADC consumes 480 mW from 1.2/1.3/1.6 V supplies and occupies an area of 5.1 mm.",
"title": ""
},
{
"docid": "neg:1840430_9",
"text": "Bike sharing systems (BSSs) have become common in many cities worldwide, providing a new transportation mode for residents' commutes. However, the management of these systems gives rise to many problems. As the bike pick-up demands at different places are unbalanced at times, the systems have to be rebalanced frequently. Rebalancing the bike availability effectively, however, is very challenging as it demands accurate prediction for inventory target level determination. In this work, we propose two types of regression models using multi-source data to predict the hourly bike pick-up demand at cluster level: Similarity Weighted K-Nearest-Neighbor (SWK) based regression and Artificial Neural Network (ANN). SWK-based regression models learn the weights of several meteorological factors and/or taxi usage and use the correlation between consecutive time slots to predict the bike pick-up demand. The ANN is trained by using historical trip records of BSS, meteorological data, and taxi trip records. Our proposed methods are tested with real data from a New York City BSS: Citi Bike NYC. Performance comparison between SWK-based and ANN-based methods is provided. Experimental results indicate the high accuracy of ANN-based prediction for bike pick-up demand using multisource data.",
"title": ""
},
{
"docid": "neg:1840430_10",
"text": "There are many excellent toolkits which provide support for developing machine learning software in Python, R, Matlab, and similar environments. Dlib-m l is an open source library, targeted at both engineers and research scientists, which aims to pro vide a similarly rich environment for developing machine learning software in the C++ language. T owards this end, dlib-ml contains an extensible linear algebra toolkit with built in BLAS supp ort. It also houses implementations of algorithms for performing inference in Bayesian networks a nd kernel-based methods for classification, regression, clustering, anomaly detection, and fe atur ranking. To enable easy use of these tools, the entire library has been developed with contract p rogramming, which provides complete and precise documentation as well as powerful debugging too ls.",
"title": ""
},
{
"docid": "neg:1840430_11",
"text": "Research into online gaming has steadily increased over the last decade, although relatively little research has examined the relationship between online gaming addiction and personality factors. This study examined the relationship between a number of personality traits (sensation seeking, self-control, aggression, neuroticism, state anxiety, and trait anxiety) and online gaming addiction. Data were collected over a 1-month period using an opportunity sample of 123 university students at an East Midlands university in the United Kingdom. Gamers completed all the online questionnaires. Results of a multiple linear regression indicated that five traits (neuroticism, sensation seeking, trait anxiety, state anxiety, and aggression) displayed significant associations with online gaming addiction. The study suggests that certain personality traits may be important in the acquisition, development, and maintenance of online gaming addiction, although further research is needed to replicate the findings of the present study.",
"title": ""
},
{
"docid": "neg:1840430_12",
"text": "This work presents a low-power low-cost CDR design for RapidIO SerDes. The design is based on phase interpolator, which is controlled by a synthesized standard cell digital block. Half-rate architecture is adopted to lessen the problems in routing high speed clocks and reduce power. An improved half-rate bang-bang phase detector is presented to assure the stability of the system. Moreover, the paper proposes a simplified control scheme for the phase interpolator to further reduce power and cost. The CDR takes an area of less than 0.05mm2, and post simulation shows that the CDR has a RMS jitter of UIpp/32 (11.4ps@3.125GBaud) and consumes 9.5mW at 3.125GBaud.",
"title": ""
},
{
"docid": "neg:1840430_13",
"text": "A prototype campus bus tacking system is designed and implemented for helping UiTM Student to pinpoint the location and estimate arrival time of their respective desired bus via their smartphone application. This project comprises integration between hardware and software. An Arduino UNO is used to control the GPS module to get the geographic coordinates. An android smartphone application using App Inventor is also developed for the user not only to determine the time for the campus bus to arrive and also will be able to get the bus information. This friendly user system is named as \"UiTM Bus Checker\" application. The user also will be able to view position of the bus on a digital mapping from Google Maps using their smartphone application and webpage. In order to show the effectiveness of this UiTM campus bus tracking system, the practical implementations have been presented and recorded.",
"title": ""
},
{
"docid": "neg:1840430_14",
"text": "Based on phasor measurement units (PMUs), a synchronphasor system is widely recognized as a promising smart grid measurement system. It is able to provide high-frequency, high-accuracy phasor measurements sampling for Wide Area Monitoring and Control (WAMC) applications.However,the high sampling frequency of measurement data under strict latency constraints introduces new challenges for real time communication. It would be very helpful if the collected data can be prioritized according to its importance such that the existing quality of service (QoS) mechanisms in the communication networks can be leveraged. To achieve this goal, certain anomaly detection functions should be conducted by the PMUs. Inspired by the recent emerging edge-fog-cloud computing hierarchical architecture, which allows computing tasks to be conducted at the network edge, a novel PMU fog is proposed in this paper. Two anomaly detection approaches, Singular Spectrum Analysis (SSA) and K-Nearest Neighbors (KNN), are evaluated in the PMU fog using the IEEE 16-machine 68-bus system. The simulation experiments based on Riverbed Modeler demonstrate that the proposed PMU fog can effectively reduce the data flow end-to-end (ETE) delay without sacrificing data completeness.",
"title": ""
},
{
"docid": "neg:1840430_15",
"text": "Correlated topic modeling has been limited to small model and problem sizes due to their high computational cost and poor scaling. In this paper, we propose a new model which learns compact topic embeddings and captures topic correlations through the closeness between the topic vectors. Our method enables efficient inference in the low-dimensional embedding space, reducing previous cubic or quadratic time complexity to linear w.r.t the topic size. We further speedup variational inference with a fast sampler to exploit sparsity of topic occurrence. Extensive experiments show that our approach is capable of handling model and data scales which are several orders of magnitude larger than existing correlation results, without sacrificing modeling quality by providing competitive or superior performance in document classification and retrieval.",
"title": ""
},
{
"docid": "neg:1840430_16",
"text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.",
"title": ""
},
{
"docid": "neg:1840430_17",
"text": "BACKGROUND\nJehovah's Witness patients (Witnesses) who undergo cardiac surgery provide a unique natural experiment in severe blood conservation because anemia, transfusion, erythropoietin, and antifibrinolytics have attendant risks. Our objective was to compare morbidity and long-term survival of Witnesses undergoing cardiac surgery with a similarly matched group of patients who received transfusions.\n\n\nMETHODS\nA total of 322 Witnesses and 87 453 non-Witnesses underwent cardiac surgery at our center from January 1, 1983, to January 1, 2011. All Witnesses prospectively refused blood transfusions. Among non-Witnesses, 38 467 did not receive blood transfusions and 48 986 did. We used propensity methods to match patient groups and parametric multiphase hazard methods to assess long-term survival. Our main outcome measures were postoperative morbidity complications, in-hospital mortality, and long-term survival.\n\n\nRESULTS\nWitnesses had fewer acute complications and shorter length of stay than matched patients who received transfusions: myocardial infarction, 0.31% vs 2.8% (P = . 01); additional operation for bleeding, 3.7% vs 7.1% (P = . 03); prolonged ventilation, 6% vs 16% (P < . 001); intensive care unit length of stay (15th, 50th, and 85th percentiles), 24, 25, and 72 vs 24, 48, and 162 hours (P < . 001); and hospital length of stay (15th, 50th, and 85th percentiles), 5, 7, and 11 vs 6, 8, and 16 days (P < . 001). Witnesses had better 1-year survival (95%; 95% CI, 93%-96%; vs 89%; 95% CI, 87%-90%; P = . 007) but similar 20-year survival (34%; 95% CI, 31%-38%; vs 32% 95% CI, 28%-35%; P = . 90).\n\n\nCONCLUSIONS\nWitnesses do not appear to be at increased risk for surgical complications or long-term mortality when comparisons are properly made by transfusion status. Thus, current extreme blood management strategies do not appear to place patients at heightened risk for reduced long-term survival.",
"title": ""
},
{
"docid": "neg:1840430_18",
"text": "In this paper we aim to formally explain the phenomenon of fast convergence of Stochastic Gradient Descent (SGD) observed in modern machine learning. The key observation is that most modern learning architectures are over-parametrized and are trained to interpolate the data by driving the empirical loss (classification and regression) close to zero. While it is still unclear why these interpolated solutions perform well on test data, we show that these regimes allow for fast convergence of SGD, comparable in number of iterations to full gradient descent. For convex loss functions we obtain an exponential convergence bound for mini-batch SGD parallel to that for full gradient descent. We show that there is a critical batch size m∗ such that: (a) SGD iteration with mini-batch sizem ≤ m∗ is nearly equivalent to m iterations of mini-batch size 1 (linear scaling regime). (b) SGD iteration with mini-batch m > m∗ is nearly equivalent to a full gradient descent iteration (saturation regime). Moreover, for the quadratic loss, we derive explicit expressions for the optimal mini-batch and step size and explicitly characterize the two regimes above. The critical mini-batch size can be viewed as the limit for effective mini-batch parallelization. It is also nearly independent of the data size, implying O(n) acceleration over GD per unit of computation. We give experimental evidence on real data which closely follows our theoretical analyses. Finally, we show how our results fit in the recent developments in training deep neural networks and discuss connections to adaptive rates for SGD and variance reduction. † See full version of this paper at arxiv.org/abs/1712.06559. Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA. Correspondence to: Siyuan Ma <masi@cse.ohio-state.edu>, Raef Bassily <bassily.1@osu.edu>, Mikhail Belkin <mbelkin@cse.ohio-state.edu>. Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s).",
"title": ""
},
{
"docid": "neg:1840430_19",
"text": "In this paper we present an effective method for developing realistic numerical three-dimensional (3-D) microwave breast models of different shape, size, and tissue density. These models are especially convenient for microwave breast cancer imaging applications and numerical analysis of human breast-microwave interactions. As in the recent studies on this area, anatomical information of the breast tissue is collected from T1-weighted 3-D MRI data of different patients' in prone position. The method presented in this paper offers significant improvements including efficient noise reduction and tissue segmentation, nonlinear mapping of electromagnetic properties, realistically asymmetric phantom shape, and a realistic classification of breast phantoms. Our method contains a five-step approach where each MRI voxel is classified and mapped to the appropriate dielectric properties. In the first step, the MRI data are denoised by estimating and removing the bias field from each slice, after which the voxels are segmented into two main tissues as fibro-glandular and adipose. Using the distribution of the voxel intensities in MRI histogram, two nonlinear mapping functions are generated for dielectric permittivity and conductivity profiles, which allow each MRI voxel to map to its proper dielectric properties. Obtained dielectric profiles are then converted into 3-D numerical breast phantoms using several image processing techniques, including morphologic operations, filtering. Resultant phantoms are classified according to their adipose content, which is a critical parameter that affects penetration depth during microwave breast imaging.",
"title": ""
}
] |
1840431 | A subject identification method based on term frequency technique | [
{
"docid": "pos:1840431_0",
"text": "For the past decade, query processing on relational data has been studied extensively, and many theoretical and practical solutions to query processing have been proposed under various scenarios. With the recent popularity of cloud computing, users now have the opportunity to outsource their data as well as the data management tasks to the cloud. However, due to the rise of various privacy issues, sensitive data (e.g., medical records) need to be encrypted before outsourcing to the cloud. In addition, query processing tasks should be handled by the cloud; otherwise, there would be no point to outsource the data at the first place. To process queries over encrypted data without the cloud ever decrypting the data is a very challenging task. In this paper, we focus on solving the k-nearest neighbor (kNN) query problem over encrypted database outsourced to a cloud: a user issues an encrypted query record to the cloud, and the cloud returns the k closest records to the user. We first present a basic scheme and demonstrate that such a naive solution is not secure. To provide better security, we propose a secure kNN protocol that protects the confidentiality of the data, user's input query, and data access patterns. Also, we empirically analyze the efficiency of our protocols through various experiments. These results indicate that our secure protocol is very efficient on the user end, and this lightweight scheme allows a user to use any mobile device to perform the kNN query.",
"title": ""
},
{
"docid": "pos:1840431_1",
"text": "This paper presents the outcomes of research into using lingual parts of music in an automatic mood classification system. Using a collection of lyrics and corresponding user-tagged moods, we build classifiers that classify lyrics of songs into moods. By comparing the performance of different mood frameworks (or dimensions), we examine to what extent the linguistic part of music reveals adequate information for assigning a mood category and which aspects of mood can be classified best. Our results show that word oriented metrics provide a valuable source of information for automatic mood classification of music, based on lyrics only. Metrics such as term frequencies and tf*idf values are used to measure relevance of words to the different mood classes. These metrics are incorporated in a machine learning classifier setup. Different partitions of the mood plane are investigated and we show that there is no large difference in mood prediction based on the mood division. Predictions on the valence, tension and combinations of aspects lead to similar performance.",
"title": ""
}
] | [
{
"docid": "neg:1840431_0",
"text": "Obese white adipose tissue (AT) is characterized by large-scale infiltration of proinflammatory macrophages, in parallel with systemic insulin resistance; however, the cellular stimulus that initiates this signaling cascade and chemokine release is still unknown. The objective of this study was to determine the role of the phosphoinositide 3-kinase (PI3K) regulatory subunits on AT macrophage (ATM) infiltration in obesity. Here, we find that the Pik3r1 regulatory subunits (i.e., p85a/p55a/p50a) are highly induced in AT from high-fat diet–fed obese mice, concurrent with insulin resistance. Global heterozygous deletion of the Pik3r1 regulatory subunits (aHZ), but not knockout of Pik3r2 (p85b), preserves whole-body, AT, and skeletal muscle insulin sensitivity, despite severe obesity. Moreover, ATM accumulation, proinflammatory gene expression, and ex vivo chemokine secretion in obese aHZ mice are markedly reduced despite endoplasmic reticulum (ER) stress, hypoxia, adipocyte hypertrophy, and Jun NH2-terminal kinase activation. Furthermore, bone marrow transplant studies reveal that these improvements in obese aHZ mice are independent of reduced Pik3r1 expression in the hematopoietic compartment. Taken together, these studies demonstrate that Pik3r1 expression plays a critical role in mediating AT insulin sensitivity and, more so, suggest that reduced PI3K activity is a key step in the initiation and propagation of the inflammatory response in obese AT.",
"title": ""
},
{
"docid": "neg:1840431_1",
"text": "Human activities such as international trade and travel promote biological invasions by accidentally or deliberately dispersing species outside their native biogeographical ranges (Lockwood, 2005; Alpert, 2006). Invasive species are now viewed as a significant component of global change and have become a serious threat to natural communities (Mack et al., 2000; Pyšek & Richardson, 2010). The ecological impact of invasive species has been observed in all types of ecosystems. Typically, invaders can change the niches of co-occurring species, alter the structure and function of ecosystems by degrading native communities and disrupt evolutionary processes through anthropogenic movement of species across physical and geographical barriers (D’Antonio & Vitousek, 1992; Mack et al., 2000; Richardson et al., 2000; Levine et al., 2003; Vitousek et al., 2011). Concerns for the implications and consequences of successful invasions have stimulated a considerable amount of research. Recent invasion research ranges from the developing testable hypotheses aimed at understanding the mechanisms of invasion to providing guidelines for control and management of invasive species. Several recent studies have used hyperspectral remote sensing (Underwood et al., 2003; Lass et al., 2005; Underwood Department of Biological Sciences, Murray State University, Murray, KY 42071, USA, Fondazione Edmund Mach, Research and Innovation Centre, Department of Biodiversity and Molecular Ecology, GIS and Remote Sensing Unit, Via E. Mach 1, 38010 S. Michele all’Adige, TN, Italy, Center for the Study of Institutions, Population, and Environmental Change, Indiana University, 408 N. Indiana Avenue, Bloomington, IN 47408, USA, Ashoka Trust for Research in Ecology and the Environment (ATREE), Royal Enclave, Srirampura, Jakkur Post, Bangalore 560064, India",
"title": ""
},
{
"docid": "neg:1840431_2",
"text": "Over the years, we have harnessed the power of computing to improve the speed of operations and increase in productivity. Also, we have witnessed the merging of computing and telecommunications. This excellent combination of two important fields has propelled our capability even further, allowing us to communicate anytime and anywhere, improving our work flow and increasing our quality of life tremendously. The next wave of evolution we foresee is the convergence of telecommunication, computing, wireless, and transportation technologies. Once this happens, our roads and highways will be both our communications and transportation platforms, which will completely revolutionize when and how we access services and entertainment, how we communicate, commute, navigate, etc., in the coming future. This paper presents an overview of the current state-of-the-art, discusses current projects, their goals, and finally highlights how emergency services and road safety will evolve with the blending of vehicular communication networks with road transportation.",
"title": ""
},
{
"docid": "neg:1840431_3",
"text": "A procedure is described whereby a computer can determine whether two fingerpring impressions were made by the same finger. The procedure used the X and Y coordinates and the individual directions of the minutiae (ridge endings and bifurcations). The identity of two impressions is established by computing the density of clusters of points in AX and AY space where AX and AY are the differences in coordinates that are found in going from one of the fingerpring impressions to the other. Single fingerpring classification is discussed and experimental results using machine-read minutiae data are given. References: J. H. Wegstein, NBS Technical Notes 538 and 730. ~7 Information Processing for Radar Target Detection andClassification. A. KSIENSKI and L. WHITE, Ohio State-Previous research has demonstrated the feasibility of using multiple low-frequency radar returns for target classification. Simple object shapes have been successfully classified by such techniques, but aircraft data poses greater difficulty, as in general such data are not linearly separable. A misclassification error analysis is provided for aircraft data using k-nearest neighbor algorithms. Another recognition scheme involves the use of a bilinear fit of aircraft data; a misclassification error analysis is being prepared for this technique and will be reported. ~ A Parallel Machine for Silhouette Pre-Processing. PAUL NAHIN, Harvey Mudd-The concept of slope density is introduced as a descriptor of silhouettes. The mechanism of a parallel machine that extracts an approximation to the slope denisty is presented. The machine has been built by Aero-Jet, but because of its complexity, a digital simulation program has been developed I. The effect of sample and hold filtering on the machine output has been investigated, both theoretically, and via simulation. The design of a medical cell analyzer (i.e., marrow granulocyte precursor counter) incorporating the slope density machine is given. of Pittsburgh-In studying pictures of impossible objects, D. A. Huffman I described a labeling technique for interpreting a two dimensional line drawing as a picture of a polyhedron (a solid three dimensional object bounded by plane surfaces). Our work extends this method to interpret a set of planes in three dimensions as a \"picture\" of a four dimensional polyhedron. Huffman labeled each line in two dimensions as either i) concave, 2) convex with one visible plane, or 3) convex with two visible planes. A labeled line drawing is a valid interpretation iff the labeled lines intersect in one of twelve legal ways. Our method is …",
"title": ""
},
{
"docid": "neg:1840431_4",
"text": "Relational graphs are widely used in modeling large scale networks such as biological networks and social networks. In this kind of graph, connectivity becomes critical in identifying highly associated groups and clusters. In this paper, we investigate the issues of mining closed frequent graphs with connectivity constraints in massive relational graphs where each graph has around 10K nodes and 1M edges. We adopt the concept of edge connectivity and apply the results from graph theory, to speed up the mining process. Two approaches are developed to handle different mining requests: CloseCut, a pattern-growth approach, and splat, a pattern-reduction approach. We have applied these methods in biological datasets and found the discovered patterns interesting.",
"title": ""
},
{
"docid": "neg:1840431_5",
"text": "Human resource management systems (HRMS) integrate human resource processes and an organization's information systems. An HRMS frequently represents one of the modules of an enterprise resource planning system (ERP). ERPs are information systems that manage the business and consist of integrated software applications such customer relations and supply chain management, manufacturing, finance and human resources. ERP implementation projects frequently have high failure rates; although research has investigated a number of factors for success and failure rates, limited attention has been directed toward the implementation teams, and how to make these more effective. In this paper we argue that shared leadership represents an appropriate approach to improving the functioning of ERP implementation teams. Shared leadership represents a form of team leadership where the team members, rather than only a single team leader, engage in leadership behaviors. While shared leadership has received increased research attention during the past decade, it has not been applied to ERP implementation teams and therefore that is the purpose of this article. Toward this end, we describe issues related to ERP and HRMS implementation, teams, and the concept of shared leadership, review theoretical and empirical literature, present an integrative framework, and describe the application of shared leadership to ERP and HRMS implementation. Published by Elsevier Inc.",
"title": ""
},
{
"docid": "neg:1840431_6",
"text": "In this paper, a new method is proposed to eliminate electrolytic capacitors in a two-stage ac-dc light-emitting diode (LED) driver. DC-biased sinusoidal or square-wave LED driving-current can help to reduce the power imbalance between ac input and dc output. In doing so, film capacitors can be adopted to improve LED driver's lifetime. The relationship between the peak-to-average ratio of the pulsating current in LEDs and the storage capacitance according to given storage capacitance is derived. Using the proposed “zero-low-level square-wave driving current” scheme, the storage capacitance in the LED driver can be reduced to 52.7% comparing with that in the driver using constant dc driving current. The input power factor is almost unity, which complies with lighting equipment standards such as IEC-1000-3-2 for Class C equipments. The voltage across the storage capacitors is analyzed and verified during the whole pulse width modulation dimming range. For the ease of dimming and implementation, a 50 W LED driver with zero-low-level square-wave driving current is built and the experimental results are presented to verify the proposed methods.",
"title": ""
},
{
"docid": "neg:1840431_7",
"text": "The use of business intelligence tools and other means to generate queries has led to great variety in the size of join queries. While most queries are reasonably small, join queries with up to a hundred relations are not that exotic anymore, and the distribution of query sizes has an incredible long tail. The largest real-world query that we are aware of accesses more than 4,000 relations. This large spread makes query optimization very challenging. Join ordering is known to be NP-hard, which means that we cannot hope to solve such large problems exactly. On the other hand most queries are much smaller, and there is no reason to sacrifice optimality there. This paper introduces an adaptive optimization framework that is able to solve most common join queries exactly, while simultaneously scaling to queries with thousands of joins. A key component there is a novel search space linearization technique that leads to near-optimal execution plans for large classes of queries. In addition, we describe implementation techniques that are necessary to scale join ordering algorithms to these extremely large queries. Extensive experiments with over 10 different approaches show that the new adaptive approach proposed here performs excellent over a huge spectrum of query sizes, and produces optimal or near-optimal solutions for most common queries.",
"title": ""
},
{
"docid": "neg:1840431_8",
"text": "We present Science Assistments, an interactive environment, which assesses students’ inquiry skills as they engage in inquiry using science microworlds. We frame our variables, tasks, assessments, and methods of analyzing data in terms of evidence-centered design. Specifically, we focus on the student model, the task model, and the evidence model in the conceptual assessment framework. In order to support both assessment and the provision of scaffolding, the environment makes inferences about student inquiry skills using models developed through a combination of text replay tagging [cf. Sao Pedro et al. 2011], a method for rapid manual coding of student log files, and educational data mining. Models were developed for multiple inquiry skills, with particular focus on detecting if students are testing their articulated hypotheses, and if they are designing controlled experiments. Student-level cross-validation was applied to validate that this approach can automatically and accurately identify these inquiry skills for new students. The resulting detectors also can be applied at run-time to drive scaffolding intervention.",
"title": ""
},
{
"docid": "neg:1840431_9",
"text": "It is difficult to fully assess the quality of software inhouse, outside the actual time and context in which it will execute after deployment. As a result, it is common for software to manifest field failures, failures that occur on user machines due to untested behavior. Field failures are typically difficult to recreate and investigate on developer platforms, and existing techniques based on crash reporting provide only limited support for this task. In this paper, we present a technique for recording, reproducing, and minimizing failing executions that enables and supports inhouse debugging of field failures. We also present a tool that implements our technique and an empirical study that evaluates the technique on a widely used e-mail client.",
"title": ""
},
{
"docid": "neg:1840431_10",
"text": "Preparation for the role of therapist can occur on both professional and personal levels. Research has found that therapists are at risk for occupationally related psychological problems. It follows that self-care may be a useful complement to the professional training of future therapists. The present study examined the effects of one approach to self-care, Mindfulness-Based Stress Reduction (MBSR), for therapists in training. Using a prospective, cohort-controlled design, the study found participants in the MBSR program reported significant declines in stress, negative affect, rumination, state and trait anxiety, and significant increases in positive affect and self-compassion. Further, MBSR participation was associated with increases in mindfulness, and this enhancement was related to several of the beneficial effects of MBSR participation. Discussion highlights the potential for future research addressing the mental health needs of therapists and therapist trainees.",
"title": ""
},
{
"docid": "neg:1840431_11",
"text": "This work addresses fine-grained image classification. Our work is based on the hypothesis that when dealing with subtle differences among object classes it is critical to identify and only account for a few informative image parts, as the remaining image context may not only be uninformative but may also hurt recognition. This motivates us to formulate our problem as a sequential search for informative parts over a deep feature map produced by a deep Convolutional Neural Network (CNN). A state of this search is a set of proposal bounding boxes in the image, whose informativeness is evaluated by the heuristic function (H), and used for generating new candidate states by the successor function (S). The two functions are unified via a Long Short-Term Memory network (LSTM) into a new deep recurrent architecture, called HSnet. Thus, HSnet (i) generates proposals of informative image parts and (ii) fuses all proposals toward final fine-grained recognition. We specify both supervised and weakly supervised training of HSnet depending on the availability of object part annotations. Evaluation on the benchmark Caltech-UCSD Birds 200-2011 and Cars-196 datasets demonstrate our competitive performance relative to the state of the art.",
"title": ""
},
{
"docid": "neg:1840431_12",
"text": "Randomization is a key element in sequential and distributed computing. Reasoning about randomized algorithms is highly non-trivial. In the 1980s, this initiated first proof methods, logics, and model-checking algorithms. The field of probabilistic verification has developed considerably since then. This paper surveys the algorithmic verification of probabilistic models, in particular probabilistic model checking. We provide an informal account of the main models, the underlying algorithms, applications from reliability and dependability analysis---and beyond---and describe recent developments towards automated parameter synthesis.",
"title": ""
},
{
"docid": "neg:1840431_13",
"text": "In this paper, we present a semi-supervised method for automatic speech act recognition in email and forums. The major challenge of this task is due to lack of labeled data in these two genres. Our method leverages labeled data in the SwitchboardDAMSL and the Meeting Recorder Dialog Act database and applies simple domain adaptation techniques over a large amount of unlabeled email and forum data to address this problem. Our method uses automatically extracted features such as phrases and dependency trees, called subtree features, for semi-supervised learning. Empirical results demonstrate that our model is effective in email and forum speech act recognition.",
"title": ""
},
{
"docid": "neg:1840431_14",
"text": "Cryptography is increasingly applied to the E-commerce world, especially to the untraceable payment system and the electronic voting system. Protocols for these systems strongly require the anonymous digital signature property, and thus a blind signature strategy is the answer to it. Chaum stated that every blind signature protocol should hold two fundamental properties, blindness and intractableness. All blind signature schemes proposed previously almost are based on the integer factorization problems, discrete logarithm problems, or the quadratic residues, which are shown by Lee et al. that none of the schemes is able to meet the two fundamental properties above. Therefore, an ECC-based blind signature scheme that possesses both the above properties is proposed in this paper.",
"title": ""
},
{
"docid": "neg:1840431_15",
"text": "BACKGROUND\nLe Fort III distraction advances the Apert midface but leaves the central concavity and vertical compression untreated. The authors propose that Le Fort II distraction and simultaneous zygomatic repositioning as a combined procedure can move the central midface and lateral orbits in independent vectors in order to improve the facial deformity. The purpose of this study was to determine whether this segmental movement results in more normal facial proportions than Le Fort III distraction.\n\n\nMETHODS\nComputed tomographic scan analyses were performed before and after distraction in patients undergoing Le Fort III distraction (n = 5) and Le Fort II distraction with simultaneous zygomatic repositioning (n = 4). The calculated axial facial ratios and vertical facial ratios relative to the skull base were compared to those of unoperated Crouzon (n = 5) and normal (n = 6) controls.\n\n\nRESULTS\nWith Le Fort III distraction, facial ratios did not change with surgery and remained lower (p < 0.01; paired t test comparison) than normal and Crouzon controls. Although the face was advanced, its shape remained abnormal. With the Le Fort II segmental movement procedure, the central face advanced and lengthened more than the lateral orbit. This differential movement changed the abnormal facial ratios that were present before surgery into ratios that were not significantly different from normal controls (p > 0.05).\n\n\nCONCLUSION\nCompared with Le Fort III distraction, Le Fort II distraction with simultaneous zygomatic repositioning normalizes the position and the shape of the Apert face.\n\n\nCLINICAL QUESTION/LEVEL OF EVIDENCE\nTherapeutic, III.",
"title": ""
},
{
"docid": "neg:1840431_16",
"text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.",
"title": ""
},
{
"docid": "neg:1840431_17",
"text": "Background A military aircraft in a hostile environment may need to use radar jamming in order to avoid being detected or engaged by the enemy. Effective jamming can require knowledge of the number and type of enemy radars; however, the radar receiver on the aircraft will observe a single stream of pulses from all radar emitters combined. It is advantageous to separate this collection of pulses into individual streams each corresponding to a particular emitter in the environment; this process is known as pulse deinterleaving. Pulse deinterleaving is critical for effective electronic warfare (EW) signal processing such as electronic attack (EA) and electronic protection (EP) because it not only aids in the identification of enemy radars but also permits the intelligent allocation of processing resources.",
"title": ""
},
{
"docid": "neg:1840431_18",
"text": "We present an algorithm for finding a meaningful vertex-to-vertex correspondence between two 3D shapes given as triangle meshes. Our algorithm operates on embeddings of the two shapes in the spectral domain so as to normalize them with respect to uniform scaling and rigid-body transformation. Invariance to shape bending is achieved by relying on geodesic point proximities on a mesh to capture its shape. To deal with stretching, we propose to use non-rigid alignment via thin-plate splines in the spectral domain. This is combined with a refinement step based on the geodesic proximities to improve dense correspondence. We show empirically that our algorithm outperforms previous spectral methods, as well as schemes that compute correspondence in the spatial domain via non-rigid iterative closest points or the use of local shape descriptors, e.g., 3D shape context",
"title": ""
},
{
"docid": "neg:1840431_19",
"text": "The term `heavy metal' is, in this context, imprecise. It should probably be reserved for those elements with an atomic mass of 200 or greater [e.g., mercury (200), thallium (204), lead (207), bismuth (209) and the thorium series]. In practice, the term has come to embrace any metal, exposure to which is clinically undesirable and which constitutes a potential hazard. Our intention in this review is to provide an overview of some general concepts of metal toxicology and to discuss in detail metals of particular importance, namely, cadmium, lead, mercury, thallium, bismuth, arsenic, antimony and tin. Poisoning from individual metals is rare in the UK, even when there is a known risk of exposure. Table 1 shows that during 1991±92 only 1 ́1% of male lead workers in the UK and 5 ́5% of female workers exceeded the legal limits for blood lead concentration. Collectively, however, poisoning with metals forms an important aspect of toxicology because of their widespread use and availability. Furthermore, hitherto unrecognized hazards and accidents continue to be described. The investigation of metal poisoning forms a distinct specialist area, since most metals are usually measured using atomic absorption techniques. Analyses require considerable expertise and meticulous attention to detail to ensure valid results. Different analytical performance standards may be required of assays used for environmental and occupational monitoring, or for solely toxicological purposes. Because of the high capital cost of good quality instruments, the relatively small numbers of tests required and the variety of metals, it is more cost-effective if such testing is carried out in regional, national or other centres having the necessary experience. Nevertheless, patients are frequently cared for locally, and clinical biochemists play a crucial role in maintaining a high index of suspicion and liaising with clinical colleagues to ensure the provision of correct samples for analysis and timely advice.",
"title": ""
}
] |
1840432 | Online Collaborative Learning for Open-Vocabulary Visual Classifiers | [
{
"docid": "pos:1840432_0",
"text": "This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"title": ""
}
] | [
{
"docid": "neg:1840432_0",
"text": "Recurrent neural networks are now the state-of-the-art in natural language processing because they can build rich contextual representations and process texts of arbitrary length. However, recent developments on attention mechanisms have equipped feedforward networks with similar capabilities, hence enabling faster computations due to the increase in the number of operations that can be parallelized. We explore this new type of architecture in the domain of question-answering and propose a novel approach that we call Fully Attention Based Information Retriever (FABIR). We show that FABIR achieves competitive results in the Stanford Question Answering Dataset (SQuAD) while having fewer parameters and being faster at both learning and inference than rival methods.",
"title": ""
},
{
"docid": "neg:1840432_1",
"text": "Deep neural networks have shown good data modelling capabilities when dealing with challenging and large datasets from a wide range of application areas. Convolutional Neural Networks (CNNs) offer advantages in selecting good features and Long Short-Term Memory (LSTM) networks have proven good abilities of learning sequential data. Both approaches have been reported to provide improved results in areas such image processing, voice recognition, language translation and other Natural Language Processing (NLP) tasks. Sentiment classification for short text messages from Twitter is a challenging task, and the complexity increases for Arabic language sentiment classification tasks because Arabic is a rich language in morphology. In addition, the availability of accurate pre-processing tools for Arabic is another current limitation, along with limited research available in this area. In this paper, we investigate the benefits of integrating CNNs and LSTMs and report obtained improved accuracy for Arabic sentiment analysis on different datasets. Additionally, we seek to consider the morphological diversity of particular Arabic words by using different sentiment classification levels.",
"title": ""
},
{
"docid": "neg:1840432_2",
"text": "Internet exchange points (IXPs) are an important ingredient of the Internet AS-level ecosystem - a logical fabric of the Internet made up of about 30,000 ASes and their mutual business relationships whose primary purpose is to control and manage the flow of traffic. Despite the IXPs' critical role in this fabric, little is known about them in terms of their peering matrices (i.e., who peers with whom at which IXP) and corresponding traffic matrices (i.e., how much traffic do the different ASes that peer at an IXP exchange with one another). In this paper, we report on an Internet-wide traceroute study that was specifically designed to shed light on the unknown IXP-specific peering matrices and involves targeted traceroutes from publicly available and geographically dispersed vantage points. Based on our method, we were able to discover and validate the existence of about 44K IXP-specific peering links - nearly 18K more links than were previously known. In the process, we also classified all known IXPs depending on the type of information required to detect them. Moreover, in view of the currently used inferred AS-level maps of the Internet that are known to miss a significant portion of the actual AS relationships of the peer-to-peer type, our study provides a new method for augmenting these maps with IXP-related peering links in a systematic and informed manner.",
"title": ""
},
{
"docid": "neg:1840432_3",
"text": "Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor augmented gaming, virtual reality and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection, sensing, and to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we have described comprehensively all those systems which are using smart phones and mobile phone sensors for humans good will and better human phone interaction.",
"title": ""
},
{
"docid": "neg:1840432_4",
"text": "We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up the device to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.",
"title": ""
},
{
"docid": "neg:1840432_5",
"text": "The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective. However, this technique does not easily apply to commonly used distributions such as beta or gamma without further approximations, and most practical applications of the reparameterization gradient fit Gaussian distributions. In this paper, we introduce the generalized reparameterization gradient, a method that extends the reparameterization gradient to a wider class of variational distributions. Generalized reparameterizations use invertible transformations of the latent variables which lead to transformed distributions that weakly depend on the variational parameters. This results in new Monte Carlo gradients that combine reparameterization gradients and score function gradients. We demonstrate our approach on variational inference for two complex probabilistic models. The generalized reparameterization is e ective: even a single sample from the variational distribution is enough to obtain a low-variance gradient.",
"title": ""
},
{
"docid": "neg:1840432_6",
"text": "We define a language G for querying data represented as a labeled graph G. By considering G as a relation, this graphical query language can be viewed as a relational query language, and its expressive power can be compared to that of other relational query languages. We do not propose G as an alternative to general purpose relational query languages, but rather as a complementary language in which recursive queries are simple to formulate. The user is aided in this formulation by means of a graphical interface. The provision of regular expressions in G allows recursive queries more general than transitive closure to be posed, although the language is not as powerful as those based on function-free Horn clauses. However, we hope to be able to exploit well-known graph algorithms in evaluating recursive queries efficiently, a topic which has received widespread attention recently.",
"title": ""
},
{
"docid": "neg:1840432_7",
"text": "In this literature a new design of printed antipodal UWB vivaldi antenna is proposed. The design is further modified for acquiring notch characteristics in the WLAN band and high front to backlobe ratio (F/B). The modifications are done on the ground plane of the antenna. Previous literatures have shown that the incorporation of planar meta-material structures on the CPW plane along the feed can produce notch characteristics. Here, a novel concept is introduced regarding antipodal vivaldi antenna. In the ground plane of the antenna, square ring resonator (SRR) structure slot and circular ring resonator (CRR) structure slot are cut to produce the notch characteristic on the WLAN band. The designed antenna covers a bandwidth of 6.8 GHz (2.7 GHz–9.5 GHz) and it can be useful for a large range of wireless applications like satellite communication applications and biomedical applications where directional radiation characteristic is needed. The designed antenna shows better impedance matching in the above said band. A parametric study is also performed on the antenna design to optimize the performance of the antenna. The size of the antenna is 40×44×1.57 mm3. It is designed and simulated using HFSS. The presented prototype offers well directive radiation characteristics, good gain and efficiency.",
"title": ""
},
{
"docid": "neg:1840432_8",
"text": "OBJECTIVE\nTo describe a new surgical technique to treat pectus excavatum utilizing low hardness solid silicone block that can be carved during the intraoperative period promoting a better aesthetic result.\n\n\nMETHODS\nBetween May 1994 and February 2013, 34 male patients presenting pectus excavatum were submitted to surgical repair with the use of low hardness solid silicone block, 10 to 30 Shore A. A block-shaped parallelepiped was used with height and base size coinciding with those of the bone defect. The block was carved intraoperatively according to the shape of the dissected space. The patients were followed for a minimum of 120 days postoperatively. The results and the complications were recorded.\n\n\nRESULTS\nFrom the 34 patients operated on, 28 were primary surgeries and 6 were secondary treatment, using other surgical techniques, bone or implant procedures. Postoperative complications included two case of hematomas and eight of seromas. It was necessary to remove the implant in one patient due to pain, and review surgery was performed in another to check prothesis dimensions. Two patients were submitted to fat grafting to improve the chest wall contour. The result was considered satisfactory in 33 patients.\n\n\nCONCLUSION\nThe procedure proved to be fast and effective. The results of carved silicone block were more effective for allowing a more refined contour as compared to custom made implants.",
"title": ""
},
{
"docid": "neg:1840432_9",
"text": "Embedded systems have found a very strong foothold in global Information Technology (IT) market since they can provide very specialized and intricate functionality to a wide range of products. On the other hand, the migration of IT functionality to a plethora of new smart devices (like mobile phones, cars, aviation, game or households machines) has enabled the collection of a considerable number of data that can be characterized sensitive. Therefore, there is a need for protecting that data through IT security means. However, eare usually dployed in hostile environments where they can be easily subject of physical attacks. In this paper, we provide an overview from ES hardware perspective of methods and mechanisms for providing strong security and trust. The various categories of physical attacks on security related embedded systems are presented along with countermeasures to thwart them and the importance of reconfigurable logic flexibility, adaptability and scalability along with trust protection mechanisms is highlighted. We adopt those mechanisms in order to propose a FPGA based embedded system hardware architecture capable of providing security and trust along with physical attack protection using trust zone separation. The benefits of such approach are discussed and a subsystem of the proposed architecture is implemented in FPGA technology as a proof of concept case study. From the performed analysis and implementation, it is concluded that flexibility, security and trust are fully realistic options for embedded system security enhancement. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840432_10",
"text": "OBJECTIVE\nThe objective of this research was to explore the effects of risperidone on cognitive processes in children with autism and irritable behavior.\n\n\nMETHOD\nThirty-eight children, ages 5-17 years with autism and severe behavioral disturbance, were randomly assigned to risperidone (0.5 to 3.5 mg/day) or placebo for 8 weeks. This sample of 38 was a subset of 101 subjects who participated in the clinical trial; 63 were unable to perform the cognitive tasks. A double-blind placebo-controlled parallel groups design was used. Dependent measures included tests of sustained attention, verbal learning, hand-eye coordination, and spatial memory assessed before, during, and after the 8-week treatment. Changes in performance were compared by repeated measures ANOVA.\n\n\nRESULTS\nTwenty-nine boys and 9 girls with autism and severe behavioral disturbance and a mental age >or=18 months completed the cognitive part of the study. No decline in performance occurred with risperidone. Performance on a cancellation task (number of correct detections) and a verbal learning task (word recognition) was better on risperidone than on placebo (without correction for multiplicity). Equivocal improvement also occurred on a spatial memory task. There were no significant differences between treatment conditions on the Purdue Pegboard (hand-eye coordination) task or the Analog Classroom Task (timed math test).\n\n\nCONCLUSION\nRisperidone given to children with autism at doses up to 3.5 mg for up to 8 weeks appears to have no detrimental effect on cognitive performance.",
"title": ""
},
{
"docid": "neg:1840432_11",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "neg:1840432_12",
"text": "The immune system responds to pathogens by a variety of pattern recognition molecules such as the Toll-like receptors (TLRs), which promote recognition of dangerous foreign pathogens. However, recent evidence indicates that normal intestinal microbiota might also positively influence immune responses, and protect against the development of inflammatory diseases. One of these elements may be short-chain fatty acids (SCFAs), which are produced by fermentation of dietary fibre by intestinal microbiota. A feature of human ulcerative colitis and other colitic diseases is a change in ‘healthy’ microbiota such as Bifidobacterium and Bacteriodes, and a concurrent reduction in SCFAs. Moreover, increased intake of fermentable dietary fibre, or SCFAs, seems to be clinically beneficial in the treatment of colitis. SCFAs bind the G-protein-coupled receptor 43 (GPR43, also known as FFAR2), and here we show that SCFA–GPR43 interactions profoundly affect inflammatory responses. Stimulation of GPR43 by SCFAs was necessary for the normal resolution of certain inflammatory responses, because GPR43-deficient (Gpr43-/-) mice showed exacerbated or unresolving inflammation in models of colitis, arthritis and asthma. This seemed to relate to increased production of inflammatory mediators by Gpr43-/- immune cells, and increased immune cell recruitment. Germ-free mice, which are devoid of bacteria and express little or no SCFAs, showed a similar dysregulation of certain inflammatory responses. GPR43 binding of SCFAs potentially provides a molecular link between diet, gastrointestinal bacterial metabolism, and immune and inflammatory responses.",
"title": ""
},
{
"docid": "neg:1840432_13",
"text": "Dictionary learning has been widely used in many image processing tasks. In most of these methods, the number of basis vectors is either set by experience or coarsely evaluated empirically. In this paper, we propose a new scale adaptive dictionary learning framework, which jointly estimates suitable scales and corresponding atoms in an adaptive fashion according to the training data, without the need of prior information. We design an atom counting function and develop a reliable numerical scheme to solve the challenging optimization problem. Extensive experiments on texture and video data sets demonstrate quantitatively and visually that our method can estimate the scale, without damaging the sparse reconstruction ability.",
"title": ""
},
{
"docid": "neg:1840432_14",
"text": "Redox-based resistive switching devices (ReRAM) are an emerging class of nonvolatile storage elements suited for nanoscale memory applications. In terms of logic operations, ReRAM devices were suggested to be used as programmable interconnects, large-scale look-up tables or for sequential logic operations. However, without additional selector devices these approaches are not suited for use in large scale nanocrossbar memory arrays, which is the preferred architecture for ReRAM devices due to the minimum area consumption. To overcome this issue for the sequential logic approach, we recently introduced a novel concept, which is suited for passive crossbar arrays using complementary resistive switches (CRSs). CRS cells offer two high resistive storage states, and thus, parasitic “sneak” currents are efficiently avoided. However, until now the CRS-based logic-in-memory approach was only shown to be able to perform basic Boolean logic operations using a single CRS cell. In this paper, we introduce two multi-bit adder schemes using the CRS-based logic-in-memory approach. We proof the concepts by means of SPICE simulations using a dynamical memristive device model of a ReRAM cell. Finally, we show the advantages of our novel adder concept in terms of step count and number of devices in comparison to a recently published adder approach, which applies the conventional ReRAM-based sequential logic concept introduced by Borghetti et al.",
"title": ""
},
{
"docid": "neg:1840432_15",
"text": "Our aim is to make shape memory alloys (SMAs) accessible and visible as creative crafting materials by combining them with paper. In this paper, we begin by presenting mechanisms for actuating paper with SMAs along with a set of design guidelines for achieving dramatic movement. We then describe how we tested the usability and educational potential of one of these mechanisms in a workshop where participants, age 9 to 15, made actuated electronic origami cranes. We found that participants were able to successfully build constructions integrating SMAs and paper, that they enjoyed doing so, and were able to learn skills like circuitry design and soldering over the course of the workshop.",
"title": ""
},
{
"docid": "neg:1840432_16",
"text": "Human Resource is the most important asset for any organization and it is the resource of achieving competitive advantage. Managing human resources is very challenging as compared to managing technology or capital and for its effective management, organization requires effective HRM system. HRM system should be backed up by strong HRM practices. HRM practices refer to organizational activities directed at managing the group of human resources and ensuring that the resources are employed towards the fulfillment of organizational goals. The purpose of this study is to explore contribution of Human Resource Management (HRM) practices including selection, training, career planning, compensation, performance appraisal, job definition and employee participation on perceived employee performance. This research describe why human resource management (HRM) decisions are likely to have an important and unique influence on organizational performance. This research forum will help advance research on the link between HRM and organizational performance. Unresolved questions is trying to identify in need of future study and make several suggestions intended to help researchers studying these questions build a more cumulative body of knowledge that will have key implications for body theory and practice. This study comprehensively evaluated the links between systems of High Performance Work Practices and firm performance. Results based on a national sample of firms indicate that these practices have an economically and statistically significant impact on employee performance. Support for predictions that the impact of High Performance Work Practices on firm performance is in part contingent on their interrelationships and links with competitive strategy was limited.",
"title": ""
},
{
"docid": "neg:1840432_17",
"text": "Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. \n We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.",
"title": ""
},
{
"docid": "neg:1840432_18",
"text": "Researchers have explored the design of ambient information systems across a wide range of physical and screen-based media. This work has yielded rich examples of design approaches to the problem of presenting information about a user's world in a way that is not distracting, but is aesthetically pleasing, and tangible to varying degrees. Despite these successes, accumulating theoretical and craft knowledge has been stymied by the lack of a unified vocabulary to describe these systems and a consequent lack of a framework for understanding their design attributes. We argue that this area would significantly benefit from consensus about the design space of ambient information systems and the design attributes that define and distinguish existing approaches. We present a definition of ambient information systems and a taxonomy across four design dimensions: Information Capacity, Notification Level, Representational Fidelity, and Aesthetic Emphasis. Our analysis has uncovered four patterns of system design and points to unexplored regions of the design space, which may motivate future work in the field.",
"title": ""
},
{
"docid": "neg:1840432_19",
"text": "In autonomous drone racing, a drone is required to fly through the gates quickly without any collision. Therefore, it is important to detect the gates reliably using computer vision. However, due to the complications such as varying lighting conditions and gates seen overlapped, traditional image processing algorithms based on color and geometry of the gates tend to fail during the actual racing. In this letter, we introduce a convolutional neural network to estimate the center of a gate robustly. Using the detection results, we apply a line-of-sight guidance algorithm. The proposed algorithm is implemented using low cost, off-the-shelf hardware for validation. All vision processing is performed in real time on the onboard NVIDIA Jetson TX2 embedded computer. In a number of tests our proposed framework successfully exhibited fast and reliable detection and navigation performance in indoor environment.",
"title": ""
}
] |
1840433 | Towards Bayesian Deep Learning: A Survey | [
{
"docid": "pos:1840433_0",
"text": "In this paper we introduce a novel collapsed Gibbs sampling method for the widely used latent Dirichlet allocation (LDA) model. Our new method results in significant speedups on real world text corpora. Conventional Gibbs sampling schemes for LDA require O(K) operations per sample where K is the number of topics in the model. Our proposed method draws equivalent samples but requires on average significantly less then K operations per sample. On real-word corpora FastLDA can be as much as 8 times faster than the standard collapsed Gibbs sampler for LDA. No approximations are necessary, and we show that our fast sampling scheme produces exactly the same results as the standard (but slower) sampling scheme. Experiments on four real world data sets demonstrate speedups for a wide range of collection sizes. For the PubMed collection of over 8 million documents with a required computation time of 6 CPU months for LDA, our speedup of 5.7 can save 5 CPU months of computation.",
"title": ""
},
{
"docid": "pos:1840433_1",
"text": "Due to its successful application in recommender systems, collaborative filtering (CF) has become a hot research topic in data mining and information retrieval. In traditional CF methods, only the feedback matrix, which contains either explicit feedback (also called ratings) or implicit feedback on the items given by users, is used for training and prediction. Typically, the feedback matrix is sparse, which means that most users interact with few items. Due to this sparsity problem, traditional CF with only feedback information will suffer from unsatisfactory performance. Recently, many researchers have proposed to utilize auxiliary information, such as item content (attributes), to alleviate the data sparsity problem in CF. Collaborative topic regression (CTR) is one of these methods which has achieved promising performance by successfully integrating both feedback information and item content information. In many real applications, besides the feedback and item content information, there may exist relations (also known as networks) among the items which can be helpful for recommendation. In this paper, we develop a novel hierarchical Bayesian model called Relational Collaborative Topic Regression (RCTR), which extends CTR by seamlessly integrating the user-item feedback information, item content information, and network structure among items into the same model. Experiments on real-world datasets show that our model can achieve better prediction accuracy than the state-of-the-art methods with lower empirical training time. Moreover, RCTR can learn good interpretable latent structures which are useful for recommendation.",
"title": ""
}
] | [
{
"docid": "neg:1840433_0",
"text": "While vehicle license plate recognition (VLPR) is usually done with a sliding window approach, it can have limited performance on datasets with characters that are of variable width. This can be solved by hand-crafting algorithms to prescale the characters. While this approach can work fairly well, the recognizer is only aware of the pixels within each detector window, and fails to account for other contextual information that might be present in other parts of the image. A sliding window approach also requires training data in the form of presegmented characters, which can be more difficult to obtain. In this paper, we propose a unified ConvNet-RNN model to recognize real-world captured license plate photographs. By using a Convolutional Neural Network (ConvNet) to perform feature extraction and using a Recurrent Neural Network (RNN) for sequencing, we address the problem of sliding window approaches being unable to access the context of the entire image by feeding the entire image as input to the ConvNet. This has the added benefit of being able to perform end-to-end training of the entire model on labelled, full license plate images. Experimental results comparing the ConvNet-RNN architecture to a sliding window-based approach shows that the ConvNet-RNN architecture performs significantly better. Keywords—Vehicle license plate recognition, end-to-end recognition, ConvNet-RNN, segmentation-free recognition",
"title": ""
},
{
"docid": "neg:1840433_1",
"text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharged. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification is taking place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.",
"title": ""
},
{
"docid": "neg:1840433_2",
"text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either due to technical lapse in the security mechanisms or due to defects in software implementations. Standard Bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.",
"title": ""
},
{
"docid": "neg:1840433_3",
"text": "I present a new way to parallelize the training of convolutional neural networks across multiple GPUs. The method scales significantly better than all alternatives when applied to modern convolutional neural",
"title": ""
},
{
"docid": "neg:1840433_4",
"text": "The thermal conductivities of individual single crystalline intrinsic Si nanowires with diameters of 22, 37, 56, and 115 nm were measured using a microfabricated suspended device over a temperature range of 20–320 K. Although the nanowires had well-defined crystalline order, the thermal conductivity observed was more than two orders of magnitude lower than the bulk value. The strong diameter dependence of thermal conductivity in nanowires was ascribed to the increased phonon-boundary scattering and possible phonon spectrum modification. © 2003 American Institute of Physics.@DOI: 10.1063/1.1616981 #",
"title": ""
},
{
"docid": "neg:1840433_5",
"text": "Microtia is a congenital disease with various degrees of severity, ranging from the presence of rudimentary and malformed vestigial structures to the total absence of the ear (anotia). The complex anatomy of the external ear and the necessity to provide good projection and symmetry make this reconstruction particularly difficult. The aim of this work is to report our surgical technique of microtic ear correction and to analyse the short and long term results. From 2000 to 2013, 210 patients affected by microtia were treated at the Maxillo-Facial Surgery Division, Head and Neck Department, University Hospital of Parma. The patient population consisted of 95 women and 115 men, aged from 7 to 49 years. A total of 225 reconstructions have been performed in two surgical stages basing of Firmin's technique with some modifications and refinements. The first stage consists in fabrication and grafting of a three-dimensional costal cartilage framework. The second stage is performed 5-6 months later: the reconstructed ear is raised up and an additional cartilaginous graft is used to increase its projection. A mastoid fascial flap together with a skin graft are then used to protect the cartilage graft. All reconstructions were performed without any major complication. The results have been considered satisfactory by all patients starting from the first surgical step. Low morbidity, the good results obtained and a high rate of patient satisfaction make our protocol an optimal choice for treatment of microtia. The surgeon's experience and postoperative patient care must be considered as essential aspects of treatment.",
"title": ""
},
{
"docid": "neg:1840433_6",
"text": "Traffic and power generation are the main sources of urban air pollution. The idea that outdoor air pollution can cause exacerbations of pre-existing asthma is supported by an evidence base that has been accumulating for several decades, with several studies suggesting a contribution to new-onset asthma as well. In this Series paper, we discuss the effects of particulate matter (PM), gaseous pollutants (ozone, nitrogen dioxide, and sulphur dioxide), and mixed traffic-related air pollution. We focus on clinical studies, both epidemiological and experimental, published in the previous 5 years. From a mechanistic perspective, air pollutants probably cause oxidative injury to the airways, leading to inflammation, remodelling, and increased risk of sensitisation. Although several pollutants have been linked to new-onset asthma, the strength of the evidence is variable. We also discuss clinical implications, policy issues, and research gaps relevant to air pollution and asthma.",
"title": ""
},
{
"docid": "neg:1840433_7",
"text": "To maintain the integrity of an organism constantly challenged by pathogens, the immune system is endowed with a variety of cell types. B lymphocytes were initially thought to only play a role in the adaptive branch of immunity. However, a number of converging observations revealed that two B-cell subsets, marginal zone (MZ) and B1 cells, exhibit unique developmental and functional characteristics, and can contribute to innate immune responses. In addition to their capacity to mount a local antibody response against type-2 T-cell-independent (TI-2) antigens, MZ B-cells can participate to T-cell-dependent (TD) immune responses through the capture and import of blood-borne antigens to follicular areas of the spleen. Here, we discuss the multiple roles of MZ B-cells in humans, non-human primates, and rodents. We also summarize studies - performed in transgenic mice expressing fully human antibodies on their B-cells and in macaques whose infection with Simian immunodeficiency virus (SIV) represents a suitable model for HIV-1 infection in humans - showing that infectious agents have developed strategies to subvert MZ B-cell functions. In these two experimental models, we observed that two microbial superantigens for B-cells (protein A from Staphylococcus aureus and protein L from Peptostreptococcus magnus) as well as inactivated AT-2 virions of HIV-1 and infectious SIV preferentially deplete innate-like B-cells - MZ B-cells and/or B1 B-cells - with different consequences on TI and TD antibody responses. These data revealed that viruses and bacteria have developed strategies to deplete innate-like B-cells during the acute phase of infection and to impair the antibody response. Unraveling the intimate mechanisms responsible for targeting MZ B-cells in humans will be important for understanding disease pathogenesis and for designing novel vaccine strategies.",
"title": ""
},
{
"docid": "neg:1840433_8",
"text": "Non-functional requirements are an important, and often critical, aspect of any software system. However, determining the degree to which any particular software system meets such requirements and incorporating such considerations into the software design process is a difficult challenge. This paper presents a modification of the NFR framework that allows for the discovery of a set of system functionalities that optimally satisfice a given set of non-functional requirements. This new technique introduces an adaptation of softgoal interdependency graphs, denoted softgoal interdependency ruleset graphs, in which label propagation can be done consistently. This facilitates the use of optimisation algorithms to determine the best set of bottom-level operationalizing softgoals that optimally satisfice the highest-level NFR softgoals. The proposed method also introduces the capacity to incorporate both qualitative and quantitative information.",
"title": ""
},
{
"docid": "neg:1840433_9",
"text": "Mobile-edge computing (MEC) is an emerging paradigm that provides a capillary distribution of cloud computing capabilities to the edge of the wireless access network, enabling rich services and applications in close proximity to the end users. In this paper, an MEC enabled multi-cell wireless network is considered where each base station (BS) is equipped with a MEC server that assists mobile users in executing computation-intensive tasks via task offloading. The problem of joint task offloading and resource allocation is studied in order to maximize the users’ task offloading gains, which is measured by a weighted sum of reductions in task completion time and energy consumption. The considered problem is formulated as a mixed integer nonlinear program (MINLP) that involves jointly optimizing the task offloading decision, uplink transmission power of mobile users, and computing resource allocation at the MEC servers. Due to the combinatorial nature of this problem, solving for optimal solution is difficult and impractical for a large-scale network. To overcome this drawback, we propose to decompose the original problem into a resource allocation (RA) problem with fixed task offloading decision and a task offloading (TO) problem that optimizes the optimal-value function corresponding to the RA problem. We address the RA problem using convex and quasi-convex optimization techniques, and propose a novel heuristic algorithm to the TO problem that achieves a suboptimal solution in polynomial time. Simulation results show that our algorithm performs closely to the optimal solution and that it significantly improves the users’ offloading utility over traditional approaches.",
"title": ""
},
{
"docid": "neg:1840433_10",
"text": "This paper proposes a simple, cost-effective, and efficient brushless dc (BLDC) motor drive for solar photovoltaic (SPV) array-fed water pumping system. A zeta converter is utilized to extract the maximum available power from the SPV array. The proposed control algorithm eliminates phase current sensors and adapts a fundamental frequency switching of the voltage source inverter (VSI), thus avoiding the power losses due to high frequency switching. No additional control or circuitry is used for speed control of the BLDC motor. The speed is controlled through a variable dc link voltage of VSI. An appropriate control of zeta converter through the incremental conductance maximum power point tracking (INC-MPPT) algorithm offers soft starting of the BLDC motor. The proposed water pumping system is designed and modeled such that the performance is not affected under dynamic conditions. The suitability of proposed system at practical operating conditions is demonstrated through simulation results using MATLAB/Simulink followed by an experimental validation.",
"title": ""
},
{
"docid": "neg:1840433_11",
"text": "Approximate set membership data structures (ASMDSs) are ubiquitous in computing. They trade a tunable, often small, error rate ( ) for large space savings. The canonical ASMDS is the Bloom filter, which supports lookups and insertions but not deletions in its simplest form. Cuckoo filters (CFs), a recently proposed class of ASMDSs, add deletion support and often use fewer bits per item for equal . This work introduces the Morton filter (MF), a novel ASMDS that introduces several key improvements to CFs. Like CFs, MFs support lookups, insertions, and deletions, but improve their respective throughputs by 1.3× to 2.5×, 0.9× to 15.5×, and 1.3× to 1.6×. MFs achieve these improvements by (1) introducing a compressed format that permits a logically sparse filter to be stored compactly in memory, (2) leveraging succinct embedded metadata to prune unnecessary memory accesses, and (3) heavily biasing insertions to use a single hash function. With these optimizations, lookups, insertions, and deletions often only require accessing a single hardware cache line from the filter. These improvements are not at a loss in space efficiency, as MFs typically use comparable to slightly less space than CFs for the same . PVLDB Reference Format: Alex D. Breslow and Nuwan S. Jayasena. Morton Filters: Faster, Space-Efficient Cuckoo Filters via Biasing, Compression, and Decoupled Logical Sparsity. PVLDB, 11(9): 1041-1055, 2018. DOI: https://doi.org/10.14778/3213880.3213884",
"title": ""
},
{
"docid": "neg:1840433_12",
"text": "Climate change, pollution, and energy insecurity are among the greatest problems of our time. Addressing them requires major changes in our energy infrastructure. Here, we analyze the feasibility of providing worldwide energy for all purposes (electric power, transportation, heating/cooling, etc.) from wind, water, and sunlight (WWS). In Part I, we discuss WWS energy system characteristics, current and future energy demand, availability of WWS resources, numbers of WWS devices, and area and material requirements. In Part II, we address variability, economics, and policy of WWS energy. We estimate that !3,800,000 5 MW wind turbines, !49,000 300 MW concentrated solar plants, !40,000 300 MW solar PV power plants, !1.7 billion 3 kW rooftop PV systems, !5350 100 MWgeothermal power plants, !270 new 1300 MW hydroelectric power plants, !720,000 0.75 MWwave devices, and !490,000 1 MW tidal turbines can power a 2030 WWS world that uses electricity and electrolytic hydrogen for all purposes. Such a WWS infrastructure reduces world power demand by 30% and requires only !0.41% and !0.59% more of the world’s land for footprint and spacing, respectively. We suggest producing all new energy withWWSby 2030 and replacing the pre-existing energy by 2050. Barriers to the plan are primarily social and political, not technological or economic. The energy cost in a WWS world should be similar to",
"title": ""
},
{
"docid": "neg:1840433_13",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "neg:1840433_14",
"text": "This article explains how entrepreneurship can help resolve the environmental problems of global socio-economic systems. Environmental economics concludes that environmental degradation results from the failure of markets, whereas the entrepreneurship literature argues that opportunities are inherent in market failure. A synthesis of these literatures suggests that environmentally relevant market failures represent opportunities for achieving profitability while simultaneously reducing environmentally degrading economic behaviors. It also implies conceptualizations of sustainable and environmental entrepreneurship which detail how entrepreneurs seize the opportunities that are inherent in environmentally relevant market failures. Finally, the article examines the ability of the proposed theoretical framework to transcend its environmental context and provide insight into expanding the domain of the study of entrepreneurship. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840433_15",
"text": "To achieve the concept of smart roads, intelligent sensors are being placed on the roadways to collect real-time traffic streams. Traditional method is not a real-time response, and incurs high communication and storage costs. Existing distributed stream mining algorithms do not consider the resource limitation on the lightweight devices such as sensors. In this paper, we propose a distributed traffic stream mining system. The central server performs various data mining tasks only in the training and updating stage and sends the interesting patterns to the sensors. The sensors monitor and predict the coming traffic or raise alarms independently by comparing with the patterns observed in the historical streams. The sensors provide real-time response with less wireless communication and small resource requirement, and the computation burden on the central server is reduced. We evaluate our system on the real highway traffic streams in the GCM Transportation Corridor in Chicagoland.",
"title": ""
},
{
"docid": "neg:1840433_16",
"text": "Gunaratna, Kalpa. PhD, Department of Computer Science and Engineering, Wright State University, 2017. Semantics-based Summarization of Entities in Knowledge Graphs. The processing of structured and semi-structured content on the Web has been gaining attention with the rapid progress in the Linking Open Data project and the development of commercial knowledge graphs. Knowledge graphs capture domain-specific or encyclopedic knowledge in the form of a data layer and add rich and explicit semantics on top of the data layer to infer additional knowledge. The data layer of a knowledge graph represents entities and their descriptions. The semantic layer on top of the data layer is called the schema (ontology), where relationships of the entity descriptions, their classes, and the hierarchy of the relationships and classes are defined. Today, there exist large knowledge graphs in the research community (e.g., encyclopedic datasets like DBpedia and Yago) and corporate world (e.g., Google knowledge graph) that encapsulate a large amount of knowledge for human and machine consumption. Typically, they consist of millions of entities and billions of facts describing these entities. While it is good to have this much knowledge available on the Web for consumption, it leads to information overload, and hence proper summarization (and presentation) techniques need to be explored. In this dissertation, we focus on creating both comprehensive and concise entity summaries at: (i) the single entity level and (ii) the multiple entity level. To summarize a single entity, we propose a novel approach called FACeted Entity Summarization (FACES) that considers importance, which is computed by combining popularity and uniqueness, and diversity of facts getting selected for the summary. We first conceptually group facts using semantic expansion and hierarchical incremental clustering techniques and form facets (i.e., groupings) that go beyond syntactic similarity. Then we rank both the facts and facets using Information Retrieval (IR) ranking techniques to pick the",
"title": ""
},
{
"docid": "neg:1840433_17",
"text": "ITIL is the most widely used IT framework in majority of organizations in the world now. However, implementing such best practice experiences in an organization comes with some implementation challenges such as staff resistance, task conflicts and ambiguous orders. It means that implementing such framework is not easy and it can be caused of the organization destruction. This paper tries to describe overall view of ITIL framework and address major reasons on the failure of this framework’s implementation in the organizations",
"title": ""
},
{
"docid": "neg:1840433_18",
"text": "7SK RNA is a key player in the regulation of polymerase II transcription. 7SK RNA was considered as a highly conserved vertebrate innovation. The discovery of poorly conserved homologs in several insects and lophotrochozoans, however, implies a much earlier evolutionary origin. The mechanism of 7SK function requires interaction with the proteins HEXIM and La-related protein 7. Here, we present a comprehensive computational analysis of these two proteins in metazoa, and we extend the collection of 7SK RNAs by several additional candidates. In particular, we describe 7SK homologs in Caenorhabditis species. Furthermore, we derive an improved secondary structure model of 7SK RNA, which shows that the structure is quite well-conserved across animal phyla despite the extreme divergence at sequence level.",
"title": ""
}
] |
1840434 | An effective solution for a real cutting stock problem in manufacturing plastic rolls | [
{
"docid": "pos:1840434_0",
"text": "This paper discusses some of the basic formulation issues and solution procedures for solving oneand twodimensional cutting stock problems. Linear programming, sequential heuristic and hybrid solution procedures are described. For two-dimensional cutting stock problems with rectangular shapes, we also propose an approach for solving large problems with limits on the number of times an ordered size may appear in a pattern.",
"title": ""
}
] | [
{
"docid": "neg:1840434_0",
"text": "Natural language processing has been in existence for more than fifty years. During this time, it has significantly contributed to the field of human-computer interaction in terms of theoretical results and practical applications. As computers continue to become more affordable and accessible, the importance of user interfaces that are effective, robust, unobtrusive, and user-friendly – regardless of user expertise or impediments – becomes more pronounced. Since natural language usually provides for effortless and effective communication in human-human interaction, its significance and potential in human-computer interaction should not be overlooked – either spoken or typewritten, it may effectively complement other available modalities, such as windows, icons, and menus, and pointing; in some cases, such as in users with disabilities, natural language may even be the only applicable modality. This chapter examines the field of natural language processing as it relates to humancomputer interaction by focusing on its history, interactive application areas, theoretical approaches to linguistic modeling, and relevant computational and philosophical issues. It also presents a taxonomy for interactive natural language systems based on their linguistic knowledge and processing requirements, and reviews related applications. Finally, it discusses linguistic coverage issues, and explores the development of natural language widgets and their integration into multimodal user interfaces.",
"title": ""
},
{
"docid": "neg:1840434_1",
"text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.",
"title": ""
},
{
"docid": "neg:1840434_2",
"text": "Many functionals have been proposed for validation of partitions of object data produced by the fuzzy c-means (FCM) clustering algorithm. We examine the role a subtle but important parameter-the weighting exponent m of the FCM model-plays in determining the validity of FCM partitions. The functionals considered are the partition coefficient and entropy indexes of Bezdek, the Xie-Beni, and extended Xie-Beni indexes, and the FukuyamaSugeno index. Limit analysis indicates, and numerical experiments confirm, that the FukuyamaSugeno index is sensitive to both high and low values of m and may be unreliable because of this. Of the indexes tested, the Xie-Beni index provided the best response over a wide range of choices for the number of clusters, (%lo), and for m from 1.01-7. Finally, our calculations suggest that the best choice for m is probably in the interval [U, 2.51, whose mean and midpoint, m = 2, have often been the preferred choice for many users of FCM.",
"title": ""
},
{
"docid": "neg:1840434_3",
"text": "Post-traumatic stress disorder (PTSD) is accompanied by disturbed sleep and an impaired ability to learn and remember extinction of conditioned fear. Following a traumatic event, the full spectrum of PTSD symptoms typically requires several months to develop. During this time, sleep disturbances such as insomnia, nightmares, and fragmented rapid eye movement sleep predict later development of PTSD symptoms. Only a minority of individuals exposed to trauma go on to develop PTSD. We hypothesize that sleep disturbance resulting from an acute trauma, or predating the traumatic experience, may contribute to the etiology of PTSD. Because symptoms can worsen over time, we suggest that continued sleep disturbances can also maintain and exacerbate PTSD. Sleep disturbance may result in failure of extinction memory to persist and generalize, and we suggest that this constitutes one, non-exclusive mechanism by which poor sleep contributes to the development and perpetuation of PTSD. Also reviewed are neuroendocrine systems that show abnormalities in PTSD, and in which stress responses and sleep disturbance potentially produce synergistic effects that interfere with extinction learning and memory. Preliminary evidence that insomnia alone can disrupt sleep-dependent emotional processes including consolidation of extinction memory is also discussed. We suggest that optimizing sleep quality following trauma, and even strategically timing sleep to strengthen extinction memories therapeutically instantiated during exposure therapy, may allow sleep itself to be recruited in the treatment of PTSD and other trauma and stress-related disorders.",
"title": ""
},
{
"docid": "neg:1840434_4",
"text": "The dynamics of spontaneous fluctuations in neural activity are shaped by underlying patterns of anatomical connectivity. While numerous studies have demonstrated edge-wise correspondence between structural and functional connections, much less is known about how large-scale coherent functional network patterns emerge from the topology of structural networks. In the present study, we deploy a multivariate statistical technique, partial least squares, to investigate the association between spatially extended structural networks and functional networks. We find multiple statistically robust patterns, reflecting reliable combinations of structural and functional subnetworks that are optimally associated with one another. Importantly, these patterns generally do not show a one-to-one correspondence between structural and functional edges, but are instead distributed and heterogeneous, with many functional relationships arising from nonoverlapping sets of anatomical connections. We also find that structural connections between high-degree hubs are disproportionately represented, suggesting that these connections are particularly important in establishing coherent functional networks. Altogether, these results demonstrate that the network organization of the cerebral cortex supports the emergence of diverse functional network configurations that often diverge from the underlying anatomical substrate.",
"title": ""
},
{
"docid": "neg:1840434_5",
"text": "BACKGROUND\nStatin therapy reduces low-density lipoprotein (LDL) cholesterol levels and the risk of cardiovascular events, but whether the addition of ezetimibe, a nonstatin drug that reduces intestinal cholesterol absorption, can reduce the rate of cardiovascular events further is not known.\n\n\nMETHODS\nWe conducted a double-blind, randomized trial involving 18,144 patients who had been hospitalized for an acute coronary syndrome within the preceding 10 days and had LDL cholesterol levels of 50 to 100 mg per deciliter (1.3 to 2.6 mmol per liter) if they were receiving lipid-lowering therapy or 50 to 125 mg per deciliter (1.3 to 3.2 mmol per liter) if they were not receiving lipid-lowering therapy. The combination of simvastatin (40 mg) and ezetimibe (10 mg) (simvastatin-ezetimibe) was compared with simvastatin (40 mg) and placebo (simvastatin monotherapy). The primary end point was a composite of cardiovascular death, nonfatal myocardial infarction, unstable angina requiring rehospitalization, coronary revascularization (≥30 days after randomization), or nonfatal stroke. The median follow-up was 6 years.\n\n\nRESULTS\nThe median time-weighted average LDL cholesterol level during the study was 53.7 mg per deciliter (1.4 mmol per liter) in the simvastatin-ezetimibe group, as compared with 69.5 mg per deciliter (1.8 mmol per liter) in the simvastatin-monotherapy group (P<0.001). The Kaplan-Meier event rate for the primary end point at 7 years was 32.7% in the simvastatin-ezetimibe group, as compared with 34.7% in the simvastatin-monotherapy group (absolute risk difference, 2.0 percentage points; hazard ratio, 0.936; 95% confidence interval, 0.89 to 0.99; P=0.016). Rates of prespecified muscle, gallbladder, and hepatic adverse effects and cancer were similar in the two groups.\n\n\nCONCLUSIONS\nWhen added to statin therapy, ezetimibe resulted in incremental lowering of LDL cholesterol levels and improved cardiovascular outcomes. Moreover, lowering LDL cholesterol to levels below previous targets provided additional benefit. (Funded by Merck; IMPROVE-IT ClinicalTrials.gov number, NCT00202878.).",
"title": ""
},
{
"docid": "neg:1840434_6",
"text": "Resistant hypertension-uncontrolled hypertension with 3 or more antihypertensive agents-is increasingly common in clinical practice. Clinicians should exclude pseudoresistant hypertension, which results from nonadherence to medications or from elevated blood pressure related to the white coat syndrome. In patients with truly resistant hypertension, thiazide diuretics, particularly chlorthalidone, should be considered as one of the initial agents. The other 2 agents should include calcium channel blockers and angiotensin-converting enzyme inhibitors for cardiovascular protection. An increasing body of evidence has suggested benefits of mineralocorticoid receptor antagonists, such as eplerenone and spironolactone, in improving blood pressure control in patients with resistant hypertension, regardless of circulating aldosterone levels. Thus, this class of drugs should be considered for patients whose blood pressure remains elevated after treatment with a 3-drug regimen to maximal or near maximal doses. Resistant hypertension may be associated with secondary causes of hypertension including obstructive sleep apnea or primary aldosteronism. Treating these disorders can significantly improve blood pressure beyond medical therapy alone. The role of device therapy for treating the typical patient with resistant hypertension remains unclear.",
"title": ""
},
{
"docid": "neg:1840434_7",
"text": "Sentiment classification is an important subject in text mining research, which concerns the application of automatic methods for predicting the orientation of sentiment present on text documents, with many applications on a number of areas including recommender and advertising systems, customer intelligence and information retrieval. In this paper, we provide a survey and comparative study of existing techniques for opinion mining including machine learning and lexicon-based approaches, together with evaluation metrics. Also cross-domain and cross-lingual approaches are explored. Experimental results show that supervised machine learning methods, such as SVM and naive Bayes, have higher precision, while lexicon-based methods are also very competitive because they require few effort in human-labeled document and isn't sensitive to the quantity and quality of the training dataset.",
"title": ""
},
{
"docid": "neg:1840434_8",
"text": "We present a user-centric approach for stream surface generation. Given a set of densely traced streamlines over the flow field, we design a sketch-based interface that allows users to draw simple strokes directly on top of the streamline visualization result. Based on the 2D stroke, we identify a 3D seeding curve and generate a stream surface that captures the flow pattern of streamlines at the outermost layer. Then, we remove the streamlines whose patterns are covered by the stream surface. Repeating this process, users can peel the flow by replacing the streamlines with customized surfaces layer by layer. Our sketch-based interface leverages an intuitive painting metaphor which most users are familiar with. We present results using multiple data sets to show the effectiveness of our approach, and discuss the limitations and future directions.",
"title": ""
},
{
"docid": "neg:1840434_9",
"text": "Compressed sensing is a technique to sample compressible signals below the Nyquist rate, whilst still allowing near optimal reconstruction of the signal. In this paper we present a theoretical analysis of the iterative hard thresholding algorithm when applied to the compressed sensing recovery problem. We show that the algorithm has the following properties (made more precise in the main text of the paper) • It gives near-optimal error guarantees. • It is robust to observation noise. • It succeeds with a minimum number of observations. • It can be used with any sampling operator for which the operator and its adjoint can be computed. • The memory requirement is linear in the problem size. Preprint submitted to Elsevier 28 January 2009 • Its computational complexity per iteration is of the same order as the application of the measurement operator or its adjoint. • It requires a fixed number of iterations depending only on the logarithm of a form of signal to noise ratio of the signal. • Its performance guarantees are uniform in that they only depend on properties of the sampling operator and signal sparsity.",
"title": ""
},
{
"docid": "neg:1840434_10",
"text": "This research strives for natural language moment retrieval in long, untrimmed video streams. The problem nevertheless is not trivial especially when a video contains multiple moments of interests and the language describes complex temporal dependencies, which often happens in real scenarios. We identify two crucial challenges: semantic misalignment and structural misalignment. However, existing approaches treat different moments separately and do not explicitly model complex moment-wise temporal relations. In this paper, we present Moment Alignment Network (MAN), a novel framework that unifies the candidate moment encoding and temporal structural reasoning in a single-shot feed-forward network. MAN naturally assigns candidate moment representations aligned with language semantics over different temporal locations and scales. Most importantly, we propose to explicitly model momentwise temporal relations as a structured graph and devise an iterative graph adjustment network to jointly learn the best structure in an end-to-end manner. We evaluate the proposed approach on two challenging public benchmarks Charades-STA and DiDeMo, where our MAN significantly outperforms the state-of-the-art by a large margin.",
"title": ""
},
{
"docid": "neg:1840434_11",
"text": "Vehicle theft has become a pervasive problem in metropolitan cities. The aim of our work is to reduce the vehicle and fuel theft with an alert given by commonly used smart phones. The modern vehicles are interconnected with computer systems so that the information can be obtained from vehicular sources and Internet services. This provides space for tracking the vehicle through smart phones. In our work, an Advanced Encryption Standard (AES) algorithm is implemented which integrates a smart phone with classical embedded systems to avoid vehicle theft.",
"title": ""
},
{
"docid": "neg:1840434_12",
"text": "Despite its linguistic complexity, the Horn of Africa region includes several major languages with more than 5 million speakers, some crossing the borders of multiple countries. All of these languages have official status in regions or nations and are crucial for development; yet computational resources for the languages remain limited or non-existent. Since these languages are complex morphologically, software for morphological analysis and generation is a necessary first step toward nearly all other applications. This paper describes a resource for morphological analysis and generation for three of the most important languages in the Horn of Africa, Amharic, Tigrinya, and Oromo. 1 Language in the Horn of Africa The Horn of Africa consists politically of four modern nations, Ethiopia, Somalia, Eritrea, and Djibouti. As in most of sub-Saharan Africa, the linguistic picture in the region is complex. The great majority of people are speakers of AfroAsiatic languages belonging to three sub-families: Semitic, Cushitic, and Omotic. Approximately 75% of the population of almost 100 million people are native speakers of four languages: the Cushitic languages Oromo and Somali and the Semitic languages Amharic and Tigrinya. Many others speak one or the other of these languages as second languages. All of these languages have official status at the national or regional level. All of the languages of the region, especially the Semitic languages, are characterized by relatively complex morphology. For such languages, nearly all forms of language technology depend on the existence of software for analyzing and generating word forms. As with most other subSaharan languages, this software has previously not been available. This paper describes a set of Python programs called HornMorpho that address this lack for three of the most important languages, Amharic, Tigrinya, and Oromo. 2 Morphological processingn 2.1 Finite state morphology Morphological analysis is the segmentation of words into their component morphemes and the assignment of grammatical morphemes to grammatical categories and lexical morphemes to lexemes. Morphological generation is the reverse process. Both processes relate a surface level to a lexical level. The relationship between the levels has traditionally been viewed within linguistics in terms of an ordered series of phonological rules. Within computational morphology, a very significant advance came with the demonstration that phonological rules could be implemented as finite state transducers (Kaplan and Kay, 1994) (FSTs) and that the rule ordering could be dispensed with using FSTs that relate the surface and lexical levels directly (Koskenniemi, 1983), so-called “twolevel” morphology. A second important advance was the recognition by Karttunen et al. (1992) that a cascade of composed FSTs could implement the two-level model. This made possible quite complex finite state systems, including ordered alternation rules representing context-sensitive variation in the phonological or orthographic shape of morphemes, the morphotactics characterizing the possible sequences of morphemes (in canonical form) for a given word class, and a lexicon. The key feature of such systems is that, even though the FSTs making up the cascade must be composed in a particular order, the result of composition is a single FST relating surface and lexical levels directly, as in two-level morphology. 
Because of the invertibility of FSTs, it is a simple matter to convert an analysis FST (surface input to lexical output) to one that performs generation (lexical input to surface output). [Figure 1: Basic architecture of lexical FSTs for morphological analysis and generation. Each rectangle represents an FST; the outermost rectangle is the full FST that is actually used for processing. “.o.” represents composition of FSTs, “+” concatenation of FSTs.] This basic architecture, illustrated in Figure 1, consisting of a cascade of composed FSTs representing (1) alternation rules and (2) morphotactics, including a lexicon of stems or roots, is the basis for the system described in this paper. We may also want to handle words whose roots or stems are not found in the lexicon, especially when the available set of known roots or stems is limited. In such cases the lexical component is replaced by a phonotactic component characterizing the possible shapes of roots or stems. Such a “guesser” analyzer (Beesley and Karttunen, 2003) analyzes words with unfamiliar roots or stems by positing possible roots or stems. 2.2 Semitic morphology These ideas have revolutionized computational morphology, making languages with complex word structure, such as Finnish and Turkish, far more amenable to analysis by traditional computational techniques. However, finite state morphology is inherently biased to view morphemes as sequences of characters or phones and words as concatenations of morphemes. This presents problems in the case of non-concatenative morphology, for example, discontinuous morphemes and the template morphology that characterizes Semitic languages such as Amharic and Tigrinya. The stem of a Semitic verb consists of a root, essentially a sequence of consonants, and a template that inserts other segments between the root consonants and possibly copies certain of the consonants. For example, the Amharic verb root sbr ‘break’ can combine with roughly 50 different templates to form stems in words such as y1-sEbr-al ‘he breaks’, tEsEbbEr-E ‘it was broken’, l-assEbb1r-Ew ‘let me cause him to break something’, and sEbabar-i ‘broken into many pieces’. A number of different additions to the basic FST framework have been proposed to deal with non-concatenative morphology, all remaining finite state in their complexity. A discussion of the advantages and drawbacks of these different proposals is beyond the scope of this paper. The approach used in our system is one first proposed by Amtrup (2003), based in turn on the well studied formalism of weighted FSTs. In brief, in Amtrup’s approach, each of the arcs in a transducer may be “weighted” with a feature structure, that is, a set of grammatical feature-value pairs. As the arcs in an FST are traversed, a set of feature-value pairs is accumulated by unifying the current set with whatever appears on the arcs along the path through the transducer. These feature-value pairs represent a kind of memory for the path that has been traversed but without the power of a stack. Any arc whose feature structure fails to unify with the current set of feature-value pairs cannot be traversed. The result of traversing such an FST during morphological analysis is not only an output character sequence, representing the root of the word, but a set of feature-value pairs that represents the grammatical structure of the input word.
In the generation direction, processing begins with a root and a set of feature-value pairs, representing the desired grammatical structure of the output word, and the output is the surface wordform corresponding to the input root and grammatical structure. In Gasser (2009) we showed how Amtrup’s technique can be applied to the analysis and generation of Tigrinya verbs. For an alternate approach to handling the morphotactics of a subset of Amharic verbs, within the context of the Xerox finite state tools (Beesley and Karttunen, 2003), see Amsalu and Demeke (2006). Although Oromo, a Cushitic language, does not exhibit the root+template morphology that is typical of Semitic languages, it is also convenient to handle its morphology using the same technique because there are some long-distance dependencies and because it is useful to have the grammatical output that this approach yields for analysis.",
"title": ""
},
{
"docid": "neg:1840434_13",
"text": "A new magneto-electric dipole antenna with a unidirectional radiation pattern is proposed. A novel differential feeding structure is designed to provide an ultra-wideband impedance matching. A stable gain of 8.25±1.05 dBi is realized by introducing two slots in the magneto-electric dipole and using a rectangular box-shaped reflector, instead of a planar reflector. The antenna can achieve an impedance bandwidth of 114% for SWR ≤ 2 from 2.95 to 10.73 GHz. Radiation patterns with low cross polarization, low back radiation, fixing broadside direction mainbeam and symmetrical E- and H -plane patterns are obtained over the operating frequency range. Moreover, the correlation factor between the transmitting antenna input signal and the receiving antenna output signal is calculated for evaluating the time-domain characteristic. The proposed antenna, which is small in size, can be constructed easily by using PCB fabrication technique.",
"title": ""
},
{
"docid": "neg:1840434_14",
"text": "An overview is presented of the impact of NLO on today's daily life. While NLO researchers have promised many applications, only a few have changed our lives so far. This paper categorizes applications of NLO into three areas: improving lasers, interaction with materials, and information technology. NLO provides: coherent light of different wavelengths; multi-photon absorption for plasma-materials interaction; advanced spectroscopy and materials analysis; and applications to communications and sensors. Applications in information processing and storage seem less mature.",
"title": ""
},
{
"docid": "neg:1840434_15",
"text": "Heterogeneous cloud radio access networks (H-CRAN) is a new trend of SC that aims to leverage the heterogeneous and cloud radio access networks advantages. Low power remote radio heads (RRHs) are exploited to provide high data rates for users with high quality of service requirements (QoS), while high power macro base stations (BSs) are deployed for coverage maintenance and low QoS users support. However, the inter-tier interference between the macro BS and RRHs and energy efficiency are critical challenges that accompany resource allocation in H-CRAN. Therefore, we propose a centralized resource allocation scheme using online learning, which guarantees interference mitigation and maximizes energy efficiency while maintaining QoS requirements for all users. To foster the performance of such scheme with a model-free learning, we consider users' priority in resource blocks (RBs) allocation and compact state representation based learning methodology to enhance the learning process. Simulation results confirm that the proposed resource allocation solution can mitigate interference, increase energy and spectral efficiencies significantly, and maintain users' QoS requirements.",
"title": ""
},
{
"docid": "neg:1840434_16",
"text": "This paper is devoted to a new method of using Microsoft (MS) Kinect sensors for non-contact monitoring of breathing and heart rate estimation to detect possible medical and neurological disorders. Video sequences of facial features and thorax movements are recorded by MS Kinect image, depth and infrared sensors to enable their time analysis in selected regions of interest. The proposed methodology includes the use of computational methods and functional transforms for data selection, as well as their denoising, spectral analysis and visualization, in order to determine specific biomedical features. The results that were obtained verify the correspondence between the evaluation of the breathing frequency that was obtained from the image and infrared data of the mouth area and from the thorax movement that was recorded by the depth sensor. Spectral analysis of the time evolution of the mouth area video frames was also used for heart rate estimation. Results estimated from the image and infrared data of the mouth area were compared with those obtained by contact measurements by Garmin sensors (www.garmin.com). The study proves that simple image and depth sensors can be used to efficiently record biomedical multidimensional data with sufficient accuracy to detect selected biomedical features using specific methods of computational intelligence. The achieved accuracy for non-contact detection of breathing rate was 0.26% and the accuracy of heart rate estimation was 1.47% for the infrared sensor. The following results show how video frames with depth data can be used to differentiate different kinds of breathing. The proposed method enables us to obtain and analyse data for diagnostic purposes in the home environment or during physical activities, enabling efficient human-machine interaction.",
"title": ""
},
{
"docid": "neg:1840434_17",
"text": "Sequential learning of tasks using gradient descent leads to an unremitting decline in the accuracy of tasks for which training data is no longer available, termed catastrophic forgetting. Generative models have been explored as a means to approximate the distribution of old tasks and bypass storage of real data. Here we propose a cumulative closed-loop generator and embedded classifier using an AC-GAN architecture provided with external regularization by a small buffer. We evaluate incremental learning using a notoriously hard paradigm, “single headed learning,” in which each task is a disjoint subset of classes in the overall dataset, and performance is evaluated on all previous classes. First, we show that the variability contained in a small percentage of a dataset (memory buffer) accounts for a significant portion of the reported accuracy, both in multi-task and continual learning settings. Second, we show that using a generator to continuously output new images while training provides an up-sampling of the buffer, which prevents catastrophic forgetting and yields superior performance when compared to a fixed buffer. We achieve an average accuracy for all classes of 92.26% in MNIST and 76.15% in FASHION-MNIST after 5 tasks using GAN sampling with a buffer of only 0.17% of the entire dataset size. We compare to a network with regularization (EWC) which shows a deteriorated average performance of 29.19% (MNIST) and 26.5% (FASHION). The baseline of no regularization (plain gradient descent) performs at 99.84% (MNIST) and 99.79% (FASHION) for the last task, but below 3% for all previous tasks. Our method has very low long-term memory cost, the buffer, as well as negligible intermediate memory storage.",
"title": ""
},
{
"docid": "neg:1840434_18",
"text": "Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking — a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems.",
"title": ""
},
{
"docid": "neg:1840434_19",
"text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.",
"title": ""
}
] |
1840435 | AnswerBus question answering system | [
{
"docid": "pos:1840435_0",
"text": "SHAPE ADJECTIVE COLOR DISEASE TEXT NARRATIVE* GENERAL-INFO DEFINITION USE EXPRESSION-ORIGIN HISTORY WHY-FAMOUS BIO ANTECEDENT INFLUENCE CONSEQUENT CAUSE-EFFECT METHOD-MEANS CIRCUMSTANCE-MEANS REASON EVALUATION PRO-CON CONTRAST RATING COUNSEL-ADVICE To create the QA Typology, we analyzed 17,384 questions and their answers (downloaded from answers.com); see (Gerber, 2001). The Typology contains 94 nodes, of which 47 are leaf nodes; a section of it appears in Figure 2. Each Typology node has been annotated with examples and typical patterns of expression of both Question and Answer, as indicated in Figure 3 for Proper-Person. Question examples Question templates Who was Johnny Mathis' high school track coach? who be <entity>'s <role> Who was Lincoln's Secretary of State? Who was President of Turkmenistan in 1994? who be <role> of <entity> Who is the composer of Eugene Onegin? Who is the CEO of General Electric? Actual answers Answer templates Lou Vasquez, track coach of...and Johnny Mathis <person>, <role> of <entity> Signed Saparmurad Turkmenbachy [Niyazov], <person> <role-title*> of <entity> president of Turkmenistan ...Turkmenistan’s President Saparmurad Niyazov... <entity>’s <role> <person> ...in Tchaikovsky's Eugene Onegin... <person>'s <entity> Mr. Jack Welch, GE chairman... <role-title> <person> ... <entity> <role> ...Chairman John Welch said ...GE's <subject>|<psv object> of related role-verb Figure 3. Portion of QA Typology node annotations for Proper-Person.",
"title": ""
}
] | [
{
"docid": "neg:1840435_0",
"text": "Advances in mobile technologies and devices has changed the way users interact with devices and other users. These new interaction methods and services are offered by the help of intelligent sensing capabilities, using context, location and motion sensors. However, indoor location sensing is mostly achieved by utilizing radio signal (Wi-Fi, Bluetooth, GSM etc.) and nearest neighbor identification. The most common algorithm adopted for Received Signal Strength (RSS)-based location sensing is K Nearest Neighbor (KNN), which calculates K nearest neighboring points to mobile users (MUs). Accordingly, in this paper, we aim to improve the KNN algorithm by enhancing the neighboring point selection by applying k-means clustering approach. In the proposed method, k-means clustering algorithm groups nearest neighbors according to their distance to mobile user. Then the closest group to the mobile user is used to calculate the MU's location. The evaluation results indicate that the performance of clustered KNN is closely tied to the number of clusters, number of neighbors to be clustered and the initiation of the center points in k-mean algorithm. Keywords-component; Received signal strength, k-Means, clustering, location estimation, personal digital assistant (PDA), wireless, indoor positioning",
"title": ""
},
{
"docid": "neg:1840435_1",
"text": "Vehicle behavior models and motion prediction are critical for advanced safety systems and safety system validation. This paper studies the effectiveness of convolutional recurrent neural networks in predicting action profiles for vehicles on highways. Instead of using hand-selected features, the neural network is given an image-like representation of the local scene. Convolutional neural networks and recurrence allow for the automatic identification of robust features based on spatial and temporal relations. Real driving data from the NGSIM dataset is used for the evaluation, and the resulting models are used to propagate simulated vehicle trajectories over ten-second horizons. Prediction models using Long Short Term Memory (LSTM) networks are shown to quantitatively and qualitatively outperform baseline methods in generating realistic vehicle trajectories. Predictions over driver actions are shown to depend heavily on previous action values. Efforts to improve performance through inclusion of information about the local scene proved unsuccessful, and will be the focus of further study.",
"title": ""
},
{
"docid": "neg:1840435_2",
"text": "Data privacy refers to ensuring that users keep control over access to information, whereas data accessibility refers to ensuring that information access is unconstrained. Conflicts between privacy and accessibility of data are natural to occur, and healthcare is a domain in which they are particularly relevant. In the present article, we discuss how blockchain technology, and smart contracts, could help in some typical scenarios related to data access, data management and data interoperability for the specific healthcare domain. We then propose the implementation of a large-scale information architecture to access Electronic Health Records (EHRs) based on Smart Contracts as information mediators. Our main contribution is the framing of data privacy and accessibility issues in healthcare and the proposal of an integrated blockchain based architecture.",
"title": ""
},
{
"docid": "neg:1840435_3",
"text": "In this paper, we present a novel approach for mining opinions from product reviews, where it converts opinion mining task to identify product features, expressions of opinions and relations between them. By taking advantage of the observation that a lot of product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to phrase level. This concept is then implemented for extracting relations between product features and expressions of opinions. Experimental evaluations show that the mining task can benefit from phrase dependency parsing.",
"title": ""
},
{
"docid": "neg:1840435_4",
"text": "This paper proposes a two-axis-decoupled solar tracker based on parallel mechanism. Utilizing Grassmann line geometry, the type design of the two-axis solar tracker is investigated. Then, singularity is studied to obtain the workspace without singularities. By using the virtual work principle, the inverse dynamics is derived to find out the driving torque. Taking Beijing as a sample city where the solar tracker is placed, the motion trajectory of the tracker is planned to collect the maximum solar energy. The position of the mass center of the solar mirror on the platform is optimized to minimize the driving torque. The driving torque of the proposed tracker is compared with that of a conventional serial tracker, which shows that the proposed tracker can greatly reduce the driving torque and the reducers with large reduction ratio are not necessary. Thus, the complexity and power dissipation of the system can be reduced.",
"title": ""
},
{
"docid": "neg:1840435_5",
"text": "The cost efficiency and diversity of digital channels facilitate marketers’ frequent and interactive communication with their customers. Digital channels like the Internet, email, mobile phones and digital television offer new prospects to cultivate customer relationships. However, there are a few models explaining how digital marketing communication (DMC) works from a relationship marketing perspective, especially for cultivating customer loyalty. In this paper, we draw together previous research into an integrative conceptual model that explains how the key elements of DMC frequency and content of brand communication, personalization, and interactivity can lead to improved customer value, commitment, and loyalty.",
"title": ""
},
{
"docid": "neg:1840435_6",
"text": "We present a new approach for performing high-quality edge-preserving filtering of images and videos in real time. Our solution is based on a transform that defines an isometry between curves on the 2D image manifold in 5D and the real line. This transform preserves the geodesic distance between points on these curves, adaptively warping the input signal so that 1D edge-preserving filtering can be efficiently performed in linear time. We demonstrate three realizations of 1D edge-preserving filters, show how to produce high-quality 2D edge-preserving filters by iterating 1D-filtering operations, and empirically analyze the convergence of this process. Our approach has several desirable features: the use of 1D operations leads to considerable speedups over existing techniques and potential memory savings; its computational cost is not affected by the choice of the filter parameters; and it is the first edge-preserving filter to work on color images at arbitrary scales in real time, without resorting to subsampling or quantization. We demonstrate the versatility of our domain transform and edge-preserving filters on several real-time image and video processing tasks including edge-preserving filtering, depth-of-field effects, stylization, recoloring, colorization, detail enhancement, and tone mapping.",
"title": ""
},
{
"docid": "neg:1840435_7",
"text": "We are investigating the magnetic resonance imaging characteristics of magnetic nanoparticles (MNPs) that consist of an iron-oxide magnetic core coated with oleic acid (OA), then stabilized with a pluronic or tetronic block copolymer. Since pluronics and tetronics vary structurally, and also in the ratio of hydrophobic (poly[propylene oxide]) and hydrophilic (poly[ethylene oxide]) segments in the polymer chain and in molecular weight, it was hypothesized that their anchoring to the OA coating around the magnetic core could significantly influence the physical properties of MNPs, their interactions with biological environment following intravenous administration, and ability to localize to tumors. The amount of block copolymer associated with MNPs was seen to depend upon their molecular structures and influence the characteristics of MNPs. Pluronic F127-modified MNPs demonstrated sustained and enhanced contrast in the whole tumor, whereas that of Feridex IV was transient and confined to the tumor periphery. In conclusion, our pluronic F127-coated MNPs, which can also be loaded with anticancer agents for drug delivery, can be developed as an effective cancer theranostic agent, i.e. an agent with combined drug delivery and imaging properties.",
"title": ""
},
{
"docid": "neg:1840435_8",
"text": "Reduction in greenhouse gas emissions from transportation is essential in combating global warming and climate change. Eco-routing enables drivers to use the most eco-friendly routes and is effective in reducing vehicle emissions. The EcoTour system assigns eco-weights to a road network based on GPS and fuel consumption data collected from vehicles to enable ecorouting. Given an arbitrary source-destination pair in Denmark, EcoTour returns the shortest route, the fastest route, and the eco-route, along with statistics for the three routes. EcoTour also serves as a testbed for exploring advanced solutions to a range of challenges related to eco-routing.",
"title": ""
},
{
"docid": "neg:1840435_9",
"text": "Simulation optimization tools have the potential to provide an unprecedented level of support for the design and execution of operational control in Discrete Event Logistics Systems (DELS). While much of the simulation optimization literature has focused on developing and exploiting integration and syntactical interoperability between simulation and optimization tools, maximizing the effectiveness of these tools to support the design and execution of control behavior requires an even greater degree of interoperability than the current state of the art. In this paper, we propose a modeling methodology for operational control decision-making that can improve the interoperability between these two analysis methods and their associated tools in the context of DELS control. This methodology establishes a standard definition of operational control for both simulation and optimization methods and defines a mapping between decision variables (optimization) and execution mechanisms (simulation / base system). The goal is a standard for creating conforming simulation and optimization tools that are capable of meeting the functional needs of operational control decision making in DELS.",
"title": ""
},
{
"docid": "neg:1840435_10",
"text": "Structural health monitoring (SHM) of civil infrastructure using wireless smart sensor networks (WSSNs) has received significant public attention in recent years. The benefits of WSSNs are that they are low-cost, easy to install, and provide effective data management via on-board computation. This paper reports on the deployment and evaluation of a state-of-the-art WSSN on the new Jindo Bridge, a cable-stayed bridge in South Korea with a 344-m main span and two 70-m side spans. The central components of the WSSN deployment are the Imote2 smart sensor platforms, a custom-designed multimetric sensor boards, base stations, and software provided by the Illinois Structural Health Monitoring Project (ISHMP) Services Toolsuite. In total, 70 sensor nodes and two base stations have been deployed to monitor the bridge using an autonomous SHM application with excessive wind and vibration triggering the system to initiate monitoring. Additionally, the performance of the system is evaluated in terms of hardware durability, software stability, power consumption and energy harvesting capabilities. The Jindo Bridge SHM system constitutes the largest deployment of wireless smart sensors for civil infrastructure monitoring to date. This deployment demonstrates the strong potential of WSSNs for monitoring of large scale civil infrastructure.",
"title": ""
},
{
"docid": "neg:1840435_11",
"text": "This paper proposes a new robust adaptive beamformer a p plicable to microphone arrays. The proposed beamformer is a generalized sidelobe canceller (GSC) with a variable blocking matrix using coefficient-constrained adaptive digital filters (CCADFs). The CCADFs minimize leakage of target signal into the interference path of the GSC. Each coefficient of the CCADFs is constrained to avoid mistracking. The input signal to all the CCADFs is the output of a fixed beamformer. In multiple-input canceller, leaky ADFs are used to decrease undesirable target-signal cancellation. The proposed beamformer can allow large look-direction error with almost no degradation in interference-reduction performance and can be implemented with a small number of microphones. The maximum allowable look-direction error can be specified by the user. Simulation results show that the proposed beamformer designed to allow about 20 degrees of look-direction error can suppress interferences by more than 17dB. 1. I N T R O D U C T I O N Microphone arrays have been studied for teleconferencing, hearing aid, speech recognition, and speech enhancement, Especially adaptive microphone arrays are promising technique. They are based on adaptive beamforming such as generalized sidelobe canceller (GSC) and can attain high interference-reduction performance with a small number of microphones arranged in small space [l]. Adaptive beamformers extract the signal from the direction of arrival (DOA) specified by the steering vector, a parameter of beamforming. However, with classical adaptive beamformers based on GSC like simple Griffiths-Jim beamformer (GJBF)[2], target-signal cancellation occurs in the presence of steering-vector error. The error in the steering vector is inevitable with actual microphone arrays. Several signal processing techniques have been proposed to avoid the signal cancellation. These techniques are called robust beamformer after the fact that they are robust against errors. Unfortunately, they still have other problems such as degradation in interference-reduction performance, increase in the number of microphones, or mistracking. In this paper, a new robust adaptive beamformer to avoid these difficulties is proposed. The proposed beamformer uses a variable blocking matrix with coefficient-constrained adaptive digital filters (CCADFs). 0-7803-3 192-3/96 $5.0001996 IEEE 925 2. R O B U S T B E A M F O R M E R S BASED ON GENERALIZED SIDELOBE C A N C E L L E R A structure of the GSC with M microphones is shown in Fig.1. The GSC includes a fixed beamformer (FBF), multiple-input canceller (MC), and blocking matrix (BM). The FBF enhances the target signal. d(b ) is the output signal of the FBF a t sample index b, and zm(b) is the output signal of the m-th imicrophone ~ ( m = 0, ..., M). The MC adaptively subtracts the components correlated to the output signals ym(b) of the BM, froin the delayed output signal d ( k Q) of the FEIF, where Q is the number of delay samples for causality. The BM is a kind of spatial rejection filter. It rejects the target signal and passes interferences. If the input signals ym(b) of MC, which are the output signals of the BM, contain only interferences, the MC rejects the interferences and extract the target signal. However, if the target signal leaks in ym ( I C ) , target-signal cancellation occurs a t the MC. The BM in the simple GJBF is sensitive to the steering-vector error and easily leaks the target signal. 
The vector error is caused by microphone arrangement error, microphone sensitivity error, look-direction error, and so on. In the actual usage, the major factor of the steeringvector error is the look-direction error. This is because the target often changes the position by tlhe speaker movement. It is impossible to know t8he exact DOA of the target signal. Thus, the signal cancellation is an important problem. Several approaches to inhibit target-signal cancellation have been proposed[3]-[SI. Some robust beamformers introduce constraints to the adaptive algorithm in the MCs. Adaptive ailgorithms with leakage[3], noise injection[4], or norm conistraint[5] restrain the undesirable signal-cancellation. The beamformers pass the target signal in the presence of small steering-vector error. However, when they are designed to allow large look-direction error which is often required for microphone arrays, interference reduction is also restrained. Some robust beamformers use improved spatial filters in BM [3][6][7]. The filters eliminate the target signal in the presence of steering-vector error. However, they have been developed to allow small look-direction error. When they are designed to allow large look-direction error, the filters lose a lot of degrees of freedom for interference reduction. The loss in the degrees of freedom degrades interferencereduction performance or requires increase in the number of microphones. Target tracking or calibration is (another approach for",
"title": ""
},
{
"docid": "neg:1840435_12",
"text": "(Semi-)automatic mapping — also called (semi-)automatic alignment — of ontologies is a core task to achieve interoperability when two agents or services use different ontologies. In the existing literature, the focus ha s so far been on improving the quality of mapping results. We here consider QOM, Q uick Ontology Mapping, as a way to trade off between effectiveness (i.e. qu ality) and efficiency of the mapping generation algorithms. We show that QOM ha s lower run-time complexity than existing prominent approaches. Then, we show in experiments that this theoretical investigation translates into practical bene fits. While QOM gives up some of the possibilities for producing high-quality resu lts in favor of efficiency, our experiments show that this loss of quality is mar gin l.",
"title": ""
},
{
"docid": "neg:1840435_13",
"text": "Versu is a text-based simulationist interactive drama. Because it uses autonomous agents, the drama is highly replayable: you can play the same story from multiple perspectives, or assign different characters to the various roles. The architecture relies on the notion of a social practice to achieve coordination between the independent autonomous agents. A social practice describes a recurring social situation, and is a successor to the Schankian script. Social practices are implemented as reactive joint plans, providing affordances to the agents who participate in them. The practices never control the agents directly; they merely provide suggestions. It is always the individual agent who decides what to do, using utility-based reactive action selection.",
"title": ""
},
{
"docid": "neg:1840435_14",
"text": "Educational information mining is rising field that spotlights on breaking down educational information to create models for enhancing learning encounters and enhancing institutional viability. Expanding enthusiasm for information mining and educational frameworks, make educational information mining as another developing exploration group. Educational Data Mining intends to remove the concealed learning from expansive Educational databases with the utilization of procedures and apparatuses. Educational Data Mining grows new techniques to find information from Educational database and it is utilized for basic decision making in Educational framework. The knowledge is hidden among the Educational informational Sets and it is extractable through data mining techniques. It is essential to think about and dissect Educational information particularly understudies execution. Educational Data Mining (EDM) is the field of study relates about mining Educational information to discover intriguing examples and learning in Educational associations. This investigation is similarly worried about this subject, particularly, the understudies execution. This study investigates numerous components theoretically expected to influence student's performance in higher education, and finds a subjective model which best classifies and predicts the student's performance in light of related individual and phenomenal elements.",
"title": ""
},
{
"docid": "neg:1840435_15",
"text": "Question classification is very important for question answering. This paper present our research work on question classification through machine learning approach. In order to train the learning model, we designed a rich set of features that are predictive of question categories. An important component of question answering systems is question classification. The task of question classification is to predict the entity type of the answer of a natural language question. Question classification is typically done using machine learning techniques. Different lexical, syntactical and semantic features can be extracted from a question. In this work we combined lexical, syntactic and semantic features which improve the accuracy of classification. Furthermore, we adopted three different classifiers: Nearest Neighbors (NN), Naïve Bayes (NB), and Support Vector Machines (SVM) using two kinds of features: bag-of-words and bag-of n grams. Furthermore, we discovered that when we take SVM classifier and combine the semantic, syntactic, lexical feature we found that it will improve the accuracy of classification. We tested our proposed approaches on the well-known UIUC dataset and succeeded to achieve a new record on the accuracy of classification on this dataset.",
"title": ""
},
{
"docid": "neg:1840435_16",
"text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"title": ""
},
{
"docid": "neg:1840435_17",
"text": "BACKGROUND\nRetrospective single-centre series have shown the feasibility of sentinel lymph-node (SLN) identification in endometrial cancer. We did a prospective, multicentre cohort study to assess the detection rate and diagnostic accuracy of the SLN procedure in predicting the pathological pelvic-node status in patients with early stage endometrial cancer.\n\n\nMETHODS\nPatients with International Federation of Gynecology and Obstetrics (FIGO) stage I-II endometrial cancer had pelvic SLN assessment via cervical dual injection (with technetium and patent blue), and systematic pelvic-node dissection. All lymph nodes were histopathologically examined and SLNs were serial sectioned and examined by immunochemistry. The primary endpoint was estimation of the negative predictive value (NPV) of sentinel-node biopsy per hemipelvis. This is an ongoing study for which recruitment has ended. The study is registered with ClinicalTrials.gov, number NCT00987051.\n\n\nFINDINGS\nFrom July 5, 2007, to Aug 4, 2009, 133 patients were enrolled at nine centres in France. No complications occurred after injection of technetium colloid and no anaphylactic reactions were noted after patent blue injection. No surgical complications were reported during SLN biopsy, including procedures that involved conversion to open surgery. At least one SLN was detected in 111 of the 125 eligible patients. 19 of 111 (17%) had pelvic-lymph-node metastases. Five of 111 patients (5%) had an associated SLN in the para-aortic area. Considering the hemipelvis as the unit of analysis, NPV was 100% (95% CI 95-100) and sensitivity 100% (63-100). Considering the patient as the unit of analysis, three patients had false-negative results (two had metastatic nodes in the contralateral pelvic area and one in the para-aortic area), giving an NPV of 97% (95% CI 91-99) and sensitivity of 84% (62-95). All three of these patients had type 2 endometrial cancer. Immunohistochemistry and serial sectioning detected metastases undiagnosed by conventional histology in nine of 111 (8%) patients with detected SLNs, representing nine of the 19 patients (47%) with metastases. SLN biopsy upstaged 10% of patients with low-risk and 15% of those with intermediate-risk endometrial cancer.\n\n\nINTERPRETATION\nSLN biopsy with cervical dual labelling could be a trade-off between systematic lymphadenectomy and no dissection at all in patients with endometrial cancer of low or intermediate risk. Moreover, our study suggests that SLN biopsy could provide important data to tailor adjuvant therapy.\n\n\nFUNDING\nDirection Interrégionale de Recherche Clinique, Ile-de-France, Assistance Publique-Hôpitaux de Paris.",
"title": ""
},
{
"docid": "neg:1840435_18",
"text": "How similar are the experiences of social rejection and physical pain? Extant research suggests that a network of brain regions that support the affective but not the sensory components of physical pain underlie both experiences. Here we demonstrate that when rejection is powerfully elicited--by having people who recently experienced an unwanted break-up view a photograph of their ex-partner as they think about being rejected--areas that support the sensory components of physical pain (secondary somatosensory cortex; dorsal posterior insula) become active. We demonstrate the overlap between social rejection and physical pain in these areas by comparing both conditions in the same individuals using functional MRI. We further demonstrate the specificity of the secondary somatosensory cortex and dorsal posterior insula activity to physical pain by comparing activated locations in our study with a database of over 500 published studies. Activation in these regions was highly diagnostic of physical pain, with positive predictive values up to 88%. These results give new meaning to the idea that rejection \"hurts.\" They demonstrate that rejection and physical pain are similar not only in that they are both distressing--they share a common somatosensory representation as well.",
"title": ""
}
] |
1840436 | Supply chain ontology: Review, analysis and synthesis | [
{
"docid": "pos:1840436_0",
"text": "This document presents the Enterprise Ontology a collection of terms and de nitions relevant to business enterprises It was developed as part of the Enterprise Project a collaborative e ort to provide a framework for enterprise modelling The Enterprise Ontology will serve as a basis for this framework which includes methods and a computer toolset for enterprise modelling We give an overview of the Enterprise Project elaborate on the intended use of the Ontology and discuss the process we went through to build it The scope of the Enterprise Ontology is limited to those core concepts required for the project however it is expected that it will appeal to a wider audience It should not be considered static during the course of the project the Enterprise Ontology will be further re ned and extended",
"title": ""
}
] | [
{
"docid": "neg:1840436_0",
"text": "What is the price of anarchy when unsplittable demands are ro uted selfishly in general networks with load-dependent edge dela ys? Motivated by this question we generalize the model of [14] to the case of weighted congestion games. We show that varying demands of users crucially affect the n ature of these games, which are no longer isomorphic to exact potential gam es, even for very simple instances. Indeed we construct examples where even a single-commodity (weighted) network congestion game may have no pure Nash equ ilibrium. On the other hand, we study a special family of networks (whic h we call the l-layered networks ) and we prove that any weighted congestion game on such a network with resource delays equal to the congestions, pos sesses a pure Nash Equilibrium. We also show how to construct one in pseudo-pol yn mial time. Finally, we give a surprising answer to the question above for s uch games: The price of anarchy of any weighted l-layered network congestion game with m edges and edge delays equal to the loads, is Θ (",
"title": ""
},
{
"docid": "neg:1840436_1",
"text": "The scientific approach to understand the nature of consciousness revolves around the study of human brain. Neurobiological studies that compare the nervous system of different species have accorded highest place to the humans on account of various factors that include a highly developed cortical area comprising of approximately 100 billion neurons, that are intrinsically connected to form a highly complex network. Quantum theories of consciousness are based on mathematical abstraction and Penrose-Hameroff Orch-OR Theory is one of the most promising ones. Inspired by Penrose-Hameroff Orch-OR Theory, Behrman et. al. (Behrman, 2006) have simulated a quantum Hopfield neural network with the structure of a microtubule. They have used an extremely simplified model of the tubulin dimers with each dimer represented simply as a qubit, a single quantum two-state system. The extension of this model to n-dimensional quantum states, or n-qudits presented in this work holds considerable promise for even higher mathematical abstraction in modelling consciousness systems.",
"title": ""
},
{
"docid": "neg:1840436_2",
"text": "This paper presents a deep architecture for learning a similarity metric on variablelength character sequences. The model combines a stack of character-level bidirectional LSTM’s with a Siamese architecture. It learns to project variablelength strings into a fixed-dimensional embedding space by using only information about the similarity between pairs of strings. This model is applied to the task of job title normalization based on a manually annotated taxonomy. A small data set is incrementally expanded and augmented with new sources of variance. The model learns a representation that is selective to differences in the input that reflect semantic differences (e.g., “Java developer” vs. “HR manager”) but also invariant to nonsemantic string differences (e.g., “Java developer” vs. “Java programmer”).",
"title": ""
},
{
"docid": "neg:1840436_3",
"text": "Development and testing of a compact 200-kV, 10-kJ/s industrial-grade power supply for capacitor charging applications is described. Pulse repetition rate (PRR) can be from single shot to 250 Hz, depending on the storage capacitance. Energy dosing (ED) topology enables high efficiency at switching frequency of up to 55 kHz using standard slow IGBTs. Circuit simulation examples are given. They clearly show zero-current switching at variable frequency during the charge set by the ED governing equations. Peak power drawn from the primary source is about only 60% higher than the average power, which lowers the stress on the input rectifier. Insulation design was assisted by electrostatic field analyses. Field plots of the main transformer insulation illustrate field distribution and stresses in it. Subsystem and system tests were performed including limited insulation life test. A precision, high-impedance, fast HV divider was developed for measuring voltages up to 250 kV with risetime down to 10 μs. The charger was successfully tested with stored energy of up to 550 J at discharge via a custom designed open-air spark gap at PRR up to 20 Hz (in bursts). Future work will include testing at customer sites.",
"title": ""
},
{
"docid": "neg:1840436_4",
"text": "BACKGROUND\nThe Internet Addiction Test (IAT) by Kimberly Young is one of the most utilized diagnostic instruments for Internet addiction. Although many studies have documented psychometric properties of the IAT, consensus on the optimal overall structure of the instrument has yet to emerge since previous analyses yielded markedly different factor analytic results.\n\n\nOBJECTIVE\nThe objective of this study was to evaluate the psychometric properties of the Italian version of the IAT, specifically testing the factor structure stability across cultures.\n\n\nMETHODS\nIn order to determine the dimensional structure underlying the questionnaire, both exploratory and confirmatory factor analyses were performed. The reliability of the questionnaire was computed by the Cronbach alpha coefficient.\n\n\nRESULTS\nData analyses were conducted on a sample of 485 college students (32.3%, 157/485 males and 67.7%, 328/485 females) with a mean age of 24.05 years (SD 7.3, range 17-47). Results showed 176/485 (36.3%) participants with IAT score from 40 to 69, revealing excessive Internet use, and 11/485 (1.9%) participants with IAT score from 70 to 100, suggesting significant problems because of Internet use. The IAT Italian version showed good psychometric properties, in terms of internal consistency and factorial validity. Alpha values were satisfactory for both the one-factor solution (Cronbach alpha=.91), and the two-factor solution (Cronbach alpha=.88 and Cronbach alpha=.79). The one-factor solution comprised 20 items, explaining 36.18% of the variance. The two-factor solution, accounting for 42.15% of the variance, showed 11 items loading on Factor 1 (Emotional and Cognitive Preoccupation with the Internet) and 7 items on Factor 2 (Loss of Control and Interference with Daily Life). Goodness-of-fit indexes (NNFI: Non-Normed Fit Index; CFI: Comparative Fit Index; RMSEA: Root Mean Square Error of Approximation; SRMR: Standardized Root Mean Square Residual) from confirmatory factor analyses conducted on a random half subsample of participants (n=243) were satisfactory in both factorial solutions: two-factor model (χ²₁₃₂= 354.17, P<.001, χ²/df=2.68, NNFI=.99, CFI=.99, RMSEA=.02 [90% CI 0.000-0.038], and SRMR=.07), and one-factor model (χ²₁₆₉=483.79, P<.001, χ²/df=2.86, NNFI=.98, CFI=.99, RMSEA=.02 [90% CI 0.000-0.039], and SRMR=.07).\n\n\nCONCLUSIONS\nOur study was aimed at determining the most parsimonious and veridical representation of the structure of Internet addiction as measured by the IAT. Based on our findings, support was provided for both single and two-factor models, with slightly strong support for the bidimensionality of the instrument. Given the inconsistency of the factor analytic literature of the IAT, researchers should exercise caution when using the instrument, dividing the scale into factors or subscales. Additional research examining the cross-cultural stability of factor solutions is still needed.",
"title": ""
},
{
"docid": "neg:1840436_5",
"text": "The diffusion of fake images and videos on social networks is a fast growing problem. Commercial media editing tools allow anyone to remove, add, or clone people and objects, to generate fake images. Many techniques have been proposed to detect such conventional fakes, but new attacks emerge by the day. Image-to-image translation, based on generative adversarial networks (GANs), appears as one of the most dangerous, as it allows one to modify context and semantics of images in a very realistic way. In this paper, we study the performance of several image forgery detectors against image-to-image translation, both in ideal conditions, and in the presence of compression, routinely performed upon uploading on social networks. The study, carried out on a dataset of 36302 images, shows that detection accuracies up to 95% can be achieved by both conventional and deep learning detectors, but only the latter keep providing a high accuracy, up to 89%, on compressed data.",
"title": ""
},
{
"docid": "neg:1840436_6",
"text": "This paper presents a multi-agent based framework for target tracking. We exploit the agent-oriented software paradigm with its characteristics that provide intelligent autonomous behavior together with a real time computer vision system to achieve high performance real time target tracking. The framework consists of four layers; interface, strategic, management, and operation layers. Interface layer receives from the user the tracking parameters such as the number and type of trackers and targets and type of the tracking environment, and then delivers these parameters to the subsequent layers. Strategic (decision making) layer is provided with a knowledge base of target tracking methodologies that are previously implemented by researchers in diverse target tracking applications and are proven successful. And by inference in the knowledge base using the user input a tracking methodology is chosen. Management layer is responsible for pursuing and controlling the tracking methodology execution. Operation layer represents the phases in the tracking methodology and is responsible for communicating with the real-time computer vision system to execute the algorithms in the phases. The framework is presented with a case study to show its ability to tackle the target tracking problem and its flexibility to solve the problem with different tracking parameters. This paper describes the ability of the agent-based framework to deploy any real-time vision system that fits in solving the target tracking problem. It is a step towards a complete open standard, real-time, agent-based framework for target tracking.",
"title": ""
},
{
"docid": "neg:1840436_7",
"text": "AIM\n'Othering' is described as a social process whereby a dominant group or person uses negative attributes to define and subordinate others. Literature suggests othering creates exclusive relationships and puts patients at risk for suboptimal care. A concept analysis delineating the properties of othering was conducted to develop knowledge to support inclusionary practices in nursing.\n\n\nDESIGN\nRodgers' Evolutionary Method for concept analysis guided this study.\n\n\nMETHODS\nThe following databases were searched spanning the years 1999-2015: CINAHL, PUBMED, PsychINFO and Google. Search terms included \"othering\", \"nurse\", \"other\", \"exclusion\" and \"patient\".\n\n\nRESULTS\nTwenty-eight papers were analyzed whereby definitions, related concepts and othering attributes were identified. Findings support that othering in nursing is a sequential process with a trajectory aimed at marginalization and exclusion, which in turn has a negative impact on patient care and professional relationships. Implications are discussed in terms of deriving practical solutions to disrupt othering. We conclude with a conceptual foundation designed to support inclusionary strategies in nursing.",
"title": ""
},
{
"docid": "neg:1840436_8",
"text": "The current approaches in terms of information security awareness and education are descriptive (i.e. they are not accomplishment-oriented nor do they recognize the factual/normative dualism); and current research has not explored the possibilities offered by motivation/behavioural theories. The first situation, level of descriptiveness, is deemed to be questionable because it may prove eventually that end-users fail to internalize target goals and do not follow security guidelines, for example ± which is inadequate. Moreover, the role of motivation in the area of information security is not considered seriously enough, even though its role has been widely recognised. To tackle such weaknesses, this paper constructs a conceptual foundation for information systems/organizational security awareness. The normative and prescriptive nature of end-user guidelines will be considered. In order to understand human behaviour, the behavioural science framework, consisting in intrinsic motivation, a theory of planned behaviour and a technology acceptance model, will be depicted and applied. Current approaches (such as the campaign) in the area of information security awareness and education will be analysed from the viewpoint of the theoretical framework, resulting in information on their strengths and weaknesses. Finally, a novel persuasion strategy aimed at increasing users' commitment to security guidelines is presented. spite of its significant role, seems to lack adequate foundations. To begin with, current approaches (e.g. McLean, 1992; NIST, 1995, 1998; Perry, 1985; Morwood, 1998), are descriptive in nature. Their inadequacy with respect to point of departure is partly recognized by McLean (1992), who points out that the approaches presented hitherto do not ensure learning. Learning can also be descriptive, however, which makes it an improper objective for security awareness. Learning and other concepts or approaches are not irrelevant in the case of security awareness, education or training, but these and other approaches need a reasoned contextual foundation as a point of departure in order to be relevant. For instance, if learning does not reflect the idea of prescriptiveness, the objective of the learning approach includes the fact that users may learn guidelines, but nevertheless fails to comply with them in the end. This state of affairs (level of descriptiveness[6]), is an inadequate objective for a security activity (the idea of prescriptiveness will be thoroughly considered in section 3). Also with regard to the content facet, the important role of motivation (and behavioural theories) with respect to the uses of security systems has been recognised (e.g. by NIST, 1998; Parker, 1998; Baskerville, 1989; Spruit, 1998; SSE-CMM, 1998a; 1998b; Straub, 1990; Straub et al., 1992; Thomson and von Solms, 1998; Warman, 1992) ± but only on an abstract level (as seen in Table I, the issue islevel (as seen in Table I, the issue is not considered from the viewpoint of any particular behavioural theory as yet). Motivation, however, is an issue where a deeper understanding may be of crucial relevance with respect to the effectiveness of approaches based on it. The role, possibilities and constraints of motivation and attitude in the effort to achieve positive results with respect to information security activities will be addressed at a conceptual level from the viewpoints of different theories. 
The scope of this paper is limited to the content aspects of awareness (Table I) and further end-users, thus resulting in a research contribution that is: a conceptual foundation and a framework for IS security awareness. This is achieved by addressing the following research questions: . What are the premises, nature and point of departure of awareness? . What is the role of attitude, and particularly motivation: the possibilities and requirements for achieving motivation/user acceptance and commitment with respect to information security tasks? . What approaches can be used as a framework to reach the stage of internalization and end-user",
"title": ""
},
{
"docid": "neg:1840436_9",
"text": "While MOOCs offer educational data on a new scale, many educators find great potential of the big data including detailed activity records of every learner. A learner's behavior such as if a learner will drop out from the course can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests such as surrogate exam-taker is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups including certification earning, video watching, and course sampling. The GC then predicts a certification earning learner may or may not obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and it is possible to find a surrogate exam-taker.",
"title": ""
},
{
"docid": "neg:1840436_10",
"text": "There is an underlying cascading behavior over road networks. Traffic cascading patterns are of great importance to easing traffic and improving urban planning. However, what we can observe is individual traffic conditions on different road segments at discrete time intervals, rather than explicit interactions or propagation (e.g., A→B) between road segments. Additionally, the traffic from multiple sources and the geospatial correlations between road segments make it more challenging to infer the patterns. In this paper, we first model the three-fold influences existing in traffic propagation and then propose a data-driven approach, which finds the cascading patterns through maximizing the likelihood of observed traffic data. As this is equivalent to a submodular function maximization problem, we solve it by using an approximate algorithm with provable near-optimal performance guarantees based on its submodularity. Extensive experiments on real-world datasets demonstrate the advantages of our approach in both effectiveness and efficiency.",
"title": ""
},
{
"docid": "neg:1840436_11",
"text": "In this letter, we propose an enhanced stereophonic acoustic echo suppression (SAES) algorithm incorporating spectral and temporal correlations in the short-time Fourier transform (STFT) domain. Unlike traditional stereophonic acoustic echo cancellation, SAES estimates the echo spectra in the STFT domain and uses a Wiener filter to suppress echo without performing any explicit double-talk detection. The proposed approach takes account of interdependencies among components in adjacent time frames and frequency bins, which enables more accurate estimation of the echo signals. Experimental results show that the proposed method yields improved performance compared to that of conventional SAES.",
"title": ""
},
{
"docid": "neg:1840436_12",
"text": "In recent years, wireless communication particularly in the front-end transceiver architecture has increased its functionality. This trend is continuously expanding and of particular is reconfigurable radio frequency (RF) front-end. A multi-band single chip architecture which consists of an array of switches and filters could simplify the complexity of the current superheterodyne architecture. In this paper, the design of a Single Pole Double Throw (SPDT) switch using 0.35μm Complementary Metal Oxide Semiconductor (CMOS) technology is discussed. The SPDT RF CMOS switch was then simulated in the range of frequency of 0-2GHz. At 2 GHz, the switch exhibits insertion loss of 1.153dB, isolation of 21.24dB, P1dB of 21.73dBm and IIP3 of 26.02dBm. Critical RF T/R switch characteristic such as insertion loss, isolation, power 1dB compression point and third order intercept point, IIP3 is discussed and compared with other type of switch designs. Pre and post layout simulation of the SPDT RF CMOS switch are also discussed to analyze the effect of parasitic capacitance between components' interconnection.",
"title": ""
},
{
"docid": "neg:1840436_13",
"text": "People counting is one of the key techniques in video surveillance. This task usually encounters many challenges in crowded environment, such as heavy occlusion, low resolution, imaging viewpoint variability, etc. Motivated by the success of R-CNN [1] on object detection, in this paper we propose a head detection based people counting method combining the Adaboost algorithm and the CNN. Unlike the R-CNN which uses the general object proposals as the inputs of CNN, our method uses the cascade Adaboost algorithm to obtain the head region proposals for CNN, which can greatly reduce the following classification time. Resorting to the strong ability of feature learning of the CNN, it is used as a feature extractor in this paper, instead of as a classifier as its commonlyused strategy. The final classification is done by a linear SVM classifier trained on the features extracted using the CNN feature extractor. Finally, the prior knowledge can be applied to post-process the detection results to increase the precision of head detection and the people count is obtained by counting the head detection results. A real classroom surveillance dataset is used to evaluate the proposed method and experimental results show that this method has good performance and outperforms the baseline methods, including deformable part model and cascade Adaboost methods. ∗Corresponding author Email address: gaocq@cqupt.edu.cn (Chenqiang Gao∗, Pei Li, Yajun Zhang, Jiang Liu, Lan Wang) Preprint submitted to Neurocomputing May 28, 2016",
"title": ""
},
{
"docid": "neg:1840436_14",
"text": "This article provides a novel analytical method of magnetic circuit on Axially-Laminated Anisotropic (ALA) rotor synchronous reluctance motor when the motor is magnetized on the d-axis. To simplify the calculation, the reluctance of stator magnet yoke and rotor magnetic laminations and leakage magnetic flux all are ignored. With regard to the uneven air-gap brought by the teeth and slots of the stator and rotor, the method resolves the problem with the equivalent air-gap length distribution function, and clarifies the magnetic circuit when the stator teeth are saturated or unsaturated. In order to conduct exact computation, the high order harmonics of the stator magnetic potential are also taken into account.",
"title": ""
},
{
"docid": "neg:1840436_15",
"text": "In this paper, we present a deep learning architecture which addresses the problem of 3D semantic segmentation of unstructured point clouds. Compared to previous work, we introduce grouping techniques which define point neighborhoods in the initial world space and the learned feature space. Neighborhoods are important as they allow to compute local or global point features depending on the spatial extend of the neighborhood. Additionally, we incorporate dedicated loss functions to further structure the learned point feature space: the pairwise distance loss and the centroid loss. We show how to apply these mechanisms to the task of 3D semantic segmentation of point clouds and report state-of-the-art performance on indoor and outdoor datasets. ar X iv :1 81 0. 01 15 1v 2 [ cs .C V ] 8 D ec 2 01 8 2 F. Engelmann et al.",
"title": ""
},
{
"docid": "neg:1840436_16",
"text": "We introduce cMix, a new approach to anonymous communications. Through a precomputation, the core cMix protocol eliminates all expensive realtime public-key operations—at the senders, recipients and mixnodes—thereby decreasing real-time cryptographic latency and lowering computational costs for clients. The core real-time phase performs only a few fast modular multiplications. In these times of surveillance and extensive profiling there is a great need for an anonymous communication system that resists global attackers. One widely recognized solution to the challenge of traffic analysis is a mixnet, which anonymizes a batch of messages by sending the batch through a fixed cascade of mixnodes. Mixnets can offer excellent privacy guarantees, including unlinkability of sender and receiver, and resistance to many traffic-analysis attacks that undermine many other approaches including onion routing. Existing mixnet designs, however, suffer from high latency in part because of the need for real-time public-key operations. Precomputation greatly improves the real-time performance of cMix, while its fixed cascade of mixnodes yields the strong anonymity guarantees of mixnets. cMix is unique in not requiring any real-time public-key operations by users. Consequently, cMix is the first mixing suitable for low latency chat for lightweight devices. Our presentation includes a specification of cMix, security arguments, anonymity analysis, and a performance comparison with selected other approaches. We also give benchmarks from our prototype.",
"title": ""
},
{
"docid": "neg:1840436_17",
"text": "Mood, attention and motivation co-vary with activity in the neuromodulatory systems of the brain to influence behaviour. These psychological states, mediated by neuromodulators, have a profound influence on the cognitive processes of attention, perception and, particularly, our ability to retrieve memories from the past and make new ones. Moreover, many psychiatric and neurodegenerative disorders are related to dysfunction of these neuromodulatory systems. Neurons of the brainstem nucleus locus coeruleus are the sole source of noradrenaline, a neuromodulator that has a key role in all of these forebrain activities. Elucidating the factors that control the activity of these neurons and the effect of noradrenaline in target regions is key to understanding how the brain allocates attention and apprehends the environment to select, store and retrieve information for generating adaptive behaviour.",
"title": ""
},
{
"docid": "neg:1840436_18",
"text": "This paper proposes a new face verification method that uses multiple deep convolutional neural networks (DCNNs) and a deep ensemble, that extracts two types of low dimensional but discriminative and high-level abstracted features from each DCNN, then combines them as a descriptor for face verification. Our DCNNs are built from stacked multi-scale convolutional layer blocks to present multi-scale abstraction. To train our DCNNs, we use different resolutions of triplets that consist of reference images, positive images, and negative images, and triplet-based loss function that maximize the ratio of distances between negative pairs and positive pairs and minimize the absolute distances between positive face images. A deep ensemble is generated from features extracted by each DCNN, and used as a descriptor to train the joint Bayesian learning and its transfer learning method. On the LFW, although we use only 198,018 images and only four different types of networks, the proposed method with the joint Bayesian learning and its transfer learning method achieved 98.33% accuracy. In addition to further increase the accuracy, we combine the proposed method and high dimensional LBP based joint Bayesian method, and achieved 99.08% accuracy on the LFW. Therefore, the proposed method helps to improve the accuracy of face verification when training data is insufficient to train DCNNs.",
"title": ""
},
{
"docid": "neg:1840436_19",
"text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.",
"title": ""
}
] |
1840437 | Design of an arm exoskeleton with scapula motion for shoulder rehabilitation | [
{
"docid": "pos:1840437_0",
"text": "Theoretical control algorithms are developed and an experimental system is described for 6-dof kinesthetic force/moment feedback to a human operator from a remote system. The remote system is a common six-axis slave manipulator with a force/torque sensor, while the haptic interface is a unique, cable-driven, seven-axis, force/moment-reflecting exoskeleton. The exoskeleton is used for input when motion commands are sent to the robot and for output when force/moment wrenches of contact are reflected to the human operator. This system exists at Wright-Patterson AFB. The same techniques are applicable to a virtual environment with physics models and general haptic interfaces.",
"title": ""
}
] | [
{
"docid": "neg:1840437_0",
"text": "Psychological research has shown that 'peak-end' effects influence people's retrospective evaluation of hedonic and affective experience. Rather than objectively reviewing the total amount of pleasure or pain during an experience, people's evaluation is shaped by the most intense moment (the peak) and the final moment (end). We describe an experiment demonstrating that peak-end effects can influence a user's preference for interaction sequences that are objectively identical in their overall requirements. Participants were asked to choose which of two interactive sequences of five pages they preferred. Both sequences required setting a total of 25 sliders to target values, and differed only in the distribution of the sliders across the five pages -- with one sequence intended to induce positive peak-end effects, the other negative. The study found that manipulating only the peak or the end of the series did not significantly change preference, but that a combined manipulation of both peak and end did lead to significant differences in preference, even though all series had the same overall effort.",
"title": ""
},
{
"docid": "neg:1840437_1",
"text": "This paper introduces Adaptive Computation Time (ACT), an algorithm that allows recurrent neural networks to learn how many computational steps to take between receiving an input and emitting an output. ACT requires minimal changes to the network architecture, is deterministic and differentiable, and does not add any noise to the parameter gradients. Experimental results are provided for four synthetic problems: determining the parity of binary vectors, applying sequences of binary logic operations, adding sequences of integers, and sorting sequences of real numbers. Overall performance is dramatically improved by the use of ACT, which successfully adapts the number of computational steps to the requirements of the problem. When applied to character-level language modelling on the Hutter prize Wikipedia dataset, ACT yields intriguing insight into the structure of the data, with more computation allocated to harder-to-predict transitions, such as spaces between words and ends of sentences. This suggests that ACT or other adaptive computation methods could be used to infer segment boundaries in sequence data.",
"title": ""
},
{
"docid": "neg:1840437_2",
"text": "Distributed computation is increasingly important for deep learning, and many deep learning frameworks provide built-in support for distributed training. This results in a tight coupling between the neural network computation and the underlying distributed execution, which poses a challenge for the implementation of new communication and aggregation strategies. We argue that decoupling the deep learning framework from the distributed execution framework enables the flexible development of new communication and aggregation strategies. Furthermore, we argue that Ray [12] provides a flexible set of distributed computing primitives that, when used in conjunction with modern deep learning libraries, enable the implementation of a wide range of gradient aggregation strategies appropriate for different computing environments. We show how these primitives can be used to address common problems, and demonstrate the performance benefits empirically.",
"title": ""
},
{
"docid": "neg:1840437_3",
"text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.",
"title": ""
},
{
"docid": "neg:1840437_4",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "neg:1840437_5",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "neg:1840437_6",
"text": "Currently there are many techniques based on information technology and communication aimed at assessing the performance of students. Data mining applied in the educational field (educational data mining) is one of the most popular techniques that are used to provide feedback with regard to the teaching-learning process. In recent years there have been a large number of open source applications in the area of educational data mining. These tools have facilitated the implementation of complex algorithms for identifying hidden patterns of information in academic databases. The main objective of this paper is to compare the technical features of three open source tools (RapidMiner, Knime and Weka) as used in educational data mining. These features have been compared in a practical case study on the academic records of three engineering programs in an Ecuadorian university. This comparison has allowed us to determine which tool is most effective in terms of predicting student performance.",
"title": ""
},
{
"docid": "neg:1840437_7",
"text": "Android is the most popular smartphone operating system with a market share of 80%, but as a consequence, also the platform most targeted by malware. To deal with the increasing number of malicious Android apps in the wild, malware analysts typically rely on analysis tools to extract characteristic information about an app in an automated fashion. While the importance of such tools has been addressed by the research community, the resulting prototypes remain limited in terms of analysis capabilities and availability. In this paper we present ANDRUBIS, a fully automated, publicly available and comprehensive analysis system for Android apps. ANDRUBIS combines static analysis with dynamic analysis on both Dalvik VM and system level, as well as several stimulation techniques to increase code coverage. With ANDRUBIS, we collected a dataset of over 1,000,000 Android apps, including 40% malicious apps. This dataset allows us to discuss trends in malware behavior observed from apps dating back as far as 2010, as well as to present insights gained from operating ANDRUBIS as a publicly available service for the past two years.",
"title": ""
},
{
"docid": "neg:1840437_8",
"text": "Recent developments in Artificial Intelligence (AI) have generated a steep interest from media and general public. As AI systems (e.g. robots, chatbots, avatars and other intelligent agents) are moving from being perceived as a tool to being perceived as autonomous agents and team-mates, an important focus of research and development is understanding the ethical impact of these systems. What does it mean for an AI system to make a decision? What are the moral, societal and legal consequences of their actions and decisions? Can an AI system be held accountable for its actions? How can these systems be controlled once their learning capabilities bring them into states that are possibly only remotely linked to their initial, designed, setup? Should such autonomous innovation in commercial systems even be allowed, and how should use and development be regulated? These and many other related questions are currently the focus of much attention. The way society and our systems will be able to deal with these questions will for a large part determine our level of trust, and ultimately, the impact of AI in society, and the existence of AI. Contrary to the frightening images of a dystopic future in media and popular fiction, where AI systems dominate the world and is mostly concerned with warfare, AI is already changing our daily lives mostly in ways that improve human health, safety, and productivity (Stone et al. 2016). This is the case in domain such as transportation; service robots; health-care; education; public safety and security; and entertainment. Nevertheless, and in order to ensure that those dystopic futures do not become reality, these systems must be introduced in ways that build trust and understanding, and respect human and civil rights. The need for ethical considerations in the development of intelligent interactive systems is becoming one of the main influential areas of research in the last few years, and has led to several initiatives both from researchers as from practitioners, including the IEEE initiative on Ethics of Autonomous Systems1, the Foundation for Responsible Robotics2, and the Partnership on AI3 amongst several others. As the capabilities for autonomous decision making grow, perhaps the most important issue to consider is the need to rethink responsibility (Dignum 2017). Whatever their level of autonomy and social awareness and their ability to learn, AI systems are artefacts, constructed by people to fulfil some goals. Theories, methods, algorithms are needed to integrate societal, legal and moral values into technological developments in AI, at all stages of development (analysis, design, construction, deployment and evaluation). These frameworks must deal both with the autonomic reasoning of the machine about such issues that we consider to have ethical impact, but most importantly, we need frameworks to guide design choices, to regulate the reaches of AI systems, to ensure proper data stewardship, and to help individuals determine their own involvement. Values are dependent on the socio-cultural context (Turiel 2002), and are often only implicit in deliberation processes, which means that methodologies are needed to elicit the values held by all the stakeholders, and to make these explicit can lead to better understanding and trust on artificial autonomous systems. 
That is, AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. Responsible Artificial Intelligence is about human responsibility for the development of intelligent systems along fundamental human principles and values, to ensure human flourishing and wellbeing in a sustainable world. In fact, Responsible AI is more than the ticking of some ethical ‘boxes’ in a report, or the development of some add-on features, or switch-off buttons in AI systems. Rather, responsibility is fundamental",
"title": ""
},
{
"docid": "neg:1840437_9",
"text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.",
"title": ""
},
{
"docid": "neg:1840437_10",
"text": "UNLABELLED\nL-3,4-Dihydroxy-6-(18)F-fluoro-phenyl-alanine ((18)F-FDOPA) is an amino acid analog used to evaluate presynaptic dopaminergic neuronal function. Evaluation of tumor recurrence in neurooncology is another application. Here, the kinetics of (18)F-FDOPA in brain tumors were investigated.\n\n\nMETHODS\nA total of 37 patients underwent 45 studies; 10 had grade IV, 10 had grade III, and 13 had grade II brain tumors; 2 had metastases; and 2 had benign lesions. After (18)F-DOPA was administered at 1.5-5 MBq/kg, dynamic PET images were acquired for 75 min. Images were reconstructed with iterative algorithms, and corrections for attenuation and scatter were applied. Images representing venous structures, the striatum, and tumors were generated with factor analysis, and from these, input and output functions were derived with simple threshold techniques. Compartmental modeling was applied to estimate rate constants.\n\n\nRESULTS\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors and the cerebellum but not the striatum. A 3-compartment model with corrections for tissue blood volume, metabolites, and partial volume appeared to be superior for describing (18)F-FDOPA kinetics in tumors and the striatum. A significant correlation was found between influx rate constant K and late uptake (standardized uptake value from 65 to 75 min), whereas the correlation of K with early uptake was weak. High-grade tumors had significantly higher transport rate constant k(1), equilibrium distribution volumes, and influx rate constant K than did low-grade tumors (P < 0.01). Tumor uptake showed a maximum at about 15 min, whereas the striatum typically showed a plateau-shaped curve. Patlak graphical analysis did not provide accurate parameter estimates. Logan graphical analysis yielded reliable estimates of the distribution volume and could separate newly diagnosed high-grade tumors from low-grade tumors.\n\n\nCONCLUSION\nA 2-compartment model was able to describe (18)F-FDOPA kinetics in tumors in a first approximation. A 3-compartment model with corrections for metabolites and partial volume could adequately describe (18)F-FDOPA kinetics in tumors, the striatum, and the cerebellum. This model suggests that (18)F-FDOPA was transported but not trapped in tumors, unlike in the striatum. The shape of the uptake curve appeared to be related to tumor grade. After an early maximum, high-grade tumors had a steep descending branch, whereas low-grade tumors had a slowly declining curve, like that for the cerebellum but on a higher scale.",
"title": ""
},
{
"docid": "neg:1840437_11",
"text": "Many types of data are best analyzed by fitting a curve using nonlinear regression, and computer programs that perform these calculations are readily available. Like every scientific technique, however, a nonlinear regression program can produce misleading results when used inappropriately. This article reviews the use of nonlinear regression in a practical and nonmathematical manner to answer the following questions: Why is nonlinear regression superior to linear regression of transformed data? How does nonlinear regression differ from polynomial regression and cubic spline? How do nonlinear regression programs work? What choices must an investigator make before performing nonlinear regression? What do the final results mean? How can two sets of data or two fits to one set of data be compared? What problems can cause the results to be wrong? This review is designed to demystify nonlinear regression so that both its power and its limitations will be appreciated.",
"title": ""
},
{
"docid": "neg:1840437_12",
"text": "Due to its high popularity and rich functionalities, the Portable Document Format (PDF) has become a major vector for malware propagation. To detect malicious PDF files, the first step is to extract and de-obfuscate Java Script codes from the document, for which an effective technique is yet to be created. However, existing static methods cannot de-obfuscate Java Script codes, existing dynamic methods bring high overhead, and existing hybrid methods introduce high false negatives. Therefore, in this paper, we present MPScan, a scanner that combines dynamic Java Script de-obfuscation and static malware detection. By hooking the Adobe Reader's native Java Script engine, Java Script source code and op-code can be extracted on the fly after the source code is parsed and then executed. We also perform a multilevel analysis on the resulting Java Script strings and op-code to detect malware. Our evaluation shows that regardless of obfuscation techniques, MPScan can effectively de-obfuscate and detect 98% malicious PDF samples.",
"title": ""
},
{
"docid": "neg:1840437_13",
"text": "Wireless sensor network has become an emerging technology due its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. Wireless sensor network consists of thousands of miniature devices which are called sensors but as it uses wireless media for communication, so security is the major issue. There are number of attacks on wireless of which selective forwarding attack is one of the harmful attacks. This paper describes selective forwarding attack and detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents qualitative analysis of detection techniques in tabular form. Keywordswireless sensor network, attacks, selective forwarding attacks, malicious nodes.",
"title": ""
},
{
"docid": "neg:1840437_14",
"text": "Eco-labels are part of a new wave of environmental policy that emphasizes information disclosure as a tool to induce environmentally friendly behavior by both firms and consumers. Little consensus exists as to whether eco-certified products are actually better than their conventional counterparts. This paper seeks to understand the link between eco-certification and product quality. We use data from three leading wine rating publications (Wine Advocate, Wine Enthusiast, and Wine Spectator) to assess quality for 74,148 wines produced in California between 1998 and 2009. Our results indicate that eco-certification is associated with a statistically significant increase in wine quality rating.",
"title": ""
},
{
"docid": "neg:1840437_15",
"text": "In this chapter we present a methodology for introducing and maintaining ontology based knowledge management applications into enterprises with a focus on Knowledge Processes and Knowledge Meta Processes. While the former process circles around the usage of ontologies, the latter process guides their initial set up. We illustrate our methodology by an example from a case study on skills management. The methodology serves as a scaffold for Part B “Ontology Engineering” of the handbook. It shows where more specific concerns of ontology engineering find their place and how they are related in the overall process.",
"title": ""
},
{
"docid": "neg:1840437_16",
"text": "The prevalence of obesity among children is high and is increasing. We know that obesity runs in families, with children of obese parents at greater risk of developing obesity than children of thin parents. Research on genetic factors in obesity has provided us with estimates of the proportion of the variance in a population accounted for by genetic factors. However, this research does not provide information regarding individual development. To design effective preventive interventions, research is needed to delineate how genetics and environmental factors interact in the etiology of childhood obesity. Addressing this question is especially challenging because parents provide both genes and environment for children. An enormous amount of learning about food and eating occurs during the transition from the exclusive milk diet of infancy to the omnivore's diet consumed by early childhood. This early learning is constrained by children's genetic predispositions, which include the unlearned preference for sweet tastes, salty tastes, and the rejection of sour and bitter tastes. Children also are predisposed to reject new foods and to learn associations between foods' flavors and the postingestive consequences of eating. Evidence suggests that children can respond to the energy density of the diet and that although intake at individual meals is erratic, 24-hour energy intake is relatively well regulated. There are individual differences in the regulation of energy intake as early as the preschool period. These individual differences in self-regulation are associated with differences in child-feeding practices and with children's adiposity. This suggests that child-feeding practices have the potential to affect children's energy balance via altering patterns of intake. Initial evidence indicates that imposition of stringent parental controls can potentiate preferences for high-fat, energy-dense foods, limit children's acceptance of a variety of foods, and disrupt children's regulation of energy intake by altering children's responsiveness to internal cues of hunger and satiety. This can occur when well-intended but concerned parents assume that children need help in determining what, when, and how much to eat and when parents impose child-feeding practices that provide children with few opportunities for self-control. Implications of these findings for preventive interventions are discussed.",
"title": ""
},
{
"docid": "neg:1840437_17",
"text": "A mathematical model of the system composed of two sensors, the semicircular canal and the sacculus, is suggested. The model is described by three lines of blocks, each line of which has the following structure: a biomechanical block, a mechanoelectrical transduction mechanism, and a block describing the hair cell ionic currents and membrane potential dynamics. The response of this system to various stimuli (head rotation under gravity and falling) is investigated. Identification of the model parameters was done with the experimental data obtained for the axolotl (Ambystoma tigrinum) at the Institute of Physiology, Autonomous University of Puebla, Mexico. Comparative analysis of the semicircular canal and sacculus membrane potentials is presented.",
"title": ""
},
{
"docid": "neg:1840437_18",
"text": "BACKGROUND\nThe use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality.\n\n\nOBJECTIVE\nTo describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine.\n\n\nRESULTS\nThe recommendations fall into five broad areas--capture literature-based and practice-based evidence in machine--interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for work flow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality.\n\n\nCONCLUSIONS\nAlthough the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.",
"title": ""
}
] |
1840438 | A Compact UWB Three-Way Power Divider | [
{
"docid": "pos:1840438_0",
"text": "This letter presents the design and measured performance of a microstrip three-way power combiner. The combiner is designed using the conventional Wilkinson topology with the extension to three outputs, which has been rarely considered for the design and fabrication of V-way combiners. It is shown that with an appropriate design approach, the main drawback reported with this topology (nonplanarity of the circuit when N > 2) can be minimized to have a negligible effect on the circuit performance and still allow an easy MIC or MHMIC fabrication.",
"title": ""
},
{
"docid": "pos:1840438_1",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
}
] | [
{
"docid": "neg:1840438_0",
"text": "In this letter, a miniature 0.97–1.53-GHz tunable four-pole bandpass filter with constant fractional bandwidth is demonstrated. The filter consists of three quarter-wavelength resonators and one half-wavelength resonator. By introducing cross-coupling, two transmission zeroes are generated and are located at both sides of the passband. Also, source–load coupling is employed to produce two extra transmission zeroes, resulting in a miniature (<inline-formula> <tex-math notation=\"LaTeX\">$0.09\\lambda _{{\\text {g}}}\\times 0.1\\lambda _{{\\text {g}}}$ </tex-math></inline-formula>) four-pole, four-transmission zero filter with high selectivity. The measured results show a tuning range of 0.97–1.53 GHz with an insertion loss of 4.2–2 dB and 1-dB fractional bandwidth of 5.5%. The four transmission zeroes change with the passband synchronously, ensuring high selectivity over a wide tuning range. The application areas are in software-defined radios in high-interference environments.",
"title": ""
},
{
"docid": "neg:1840438_1",
"text": "In this work we discuss the related challenges and describe an approach towards the fusion of state-of-the-art technologies from the Spoken Dialogue Systems (SDS) and the Semantic Web and Information Retrieval domains. We envision a dialogue system named LD-SDS that will support advanced, expressive, and engaging user requests, over multiple, complex, rich, and open-domain data sources that will leverage the wealth of the available Linked Data. Specifically, we focus on: a) improving the identification, disambiguation and linking of entities occurring in data sources and user input; b) offering advanced query services for exploiting the semantics of the data, with reasoning and exploratory capabilities; and c) expanding the typical information seeking dialogue model (slot filling) to better reflect real-world conversational search scenarios.",
"title": ""
},
{
"docid": "neg:1840438_2",
"text": "Optical coherence tomography (OCT) is used for non-invasive diagnosis of diabetic macular edema assessing the retinal layers. In this paper, we propose a new fully convolutional deep architecture, termed ReLayNet, for end-to-end segmentation of retinal layers and fluid masses in eye OCT scans. ReLayNet uses a contracting path of convolutional blocks (encoders) to learn a hierarchy of contextual features, followed by an expansive path of convolutional blocks (decoders) for semantic segmentation. ReLayNet is trained to optimize a joint loss function comprising of weighted logistic regression and Dice overlap loss. The framework is validated on a publicly available benchmark dataset with comparisons against five state-of-the-art segmentation methods including two deep learning based approaches to substantiate its effectiveness.",
"title": ""
},
{
"docid": "neg:1840438_3",
"text": "A national sample of 295 transgender adults and their nontransgender siblings were surveyed about demographics, perceptions of social support, and violence, harassment, and discrimination. Transwomen were older than the other 4 groups. Transwomen, transmen, and genderqueers were more highly educated than nontransgender sisters and nontransgender brothers, but did not have a corresponding higher income. Other demographic differences between groups were found in religion, geographic mobility, relationship status, and sexual orientation. Transgender people were more likely to experience harassment and discrimination than nontransgender sisters and nontransgender brothers. All transgender people perceived less social support from family than nontransgender sisters. This is the first study to compare trans people to nontrans siblings as a comparison group.",
"title": ""
},
{
"docid": "neg:1840438_4",
"text": "Interferons (IFNs) are the hallmark of the vertebrate antiviral system. Two of the three IFN families identified in higher vertebrates are now known to be important for antiviral defence in teleost fish. Based on the cysteine patterns, the fish type I IFN family can be divided into two subfamilies, which possibly interact with distinct receptors for signalling. The fish type II IFN family consists of two members, IFN-γ with similar functions to mammalian IFN-γ and a teleost specific IFN-γ related (IFN-γrel) molecule whose functions are not fully elucidated. These two type II IFNs also appear to bind to distinct receptors to exert their functions. It has become clear that fish IFN responses are mediated by the host pattern recognition receptors and an array of transcription factors including the IFN regulatory factors, the Jak/Stat proteins and the suppressor of cytokine signalling (SOCS) molecules.",
"title": ""
},
{
"docid": "neg:1840438_5",
"text": "This paper describes a method for generating sense-tagged data using Wikipedia as a source of sense annotations. Through word sense disambiguation experiments, we show that the Wikipedia-based sense annotations are reliable and can be used to construct accurate sense classifiers.",
"title": ""
},
{
"docid": "neg:1840438_6",
"text": "This paper illustrates the mechanical structure's spherical motion, kinematic matrices and achievable workspace of an exoskeleton upper limb device. The purpose of this paper is to assist individuals that have lost their upper limb motor functions by creating an exoskeleton device that does not require an external support; but still provides a large workspace. This allows for movement according to the Activities of Daily Living (ADL).",
"title": ""
},
{
"docid": "neg:1840438_7",
"text": "Sentence relation extraction aims to extract relational facts from sentences, which is an important task in natural language processing field. Previous models rely on the manually labeled supervised dataset. However, the human annotation is costly and limits to the number of relation and data size, which is difficult to scale to large domains. In order to conduct largely scaled relation extraction, we utilize an existing knowledge base to heuristically align with texts, which not rely on human annotation and easy to scale. However, using distant supervised data for relation extraction is facing a new challenge: sentences in the distant supervised dataset are not directly labeled and not all sentences that mentioned an entity pair can represent the relation between them. To solve this problem, we propose a novel model with reinforcement learning. The relation of the entity pair is used as distant supervision and guide the training of relation extractor with the help of reinforcement learning method. We conduct two types of experiments on a publicly released dataset. Experiment results demonstrate the effectiveness of the proposed method compared with baseline models, which achieves 13.36% improvement.",
"title": ""
},
{
"docid": "neg:1840438_8",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "neg:1840438_9",
"text": "Parallel bit stream algorithms exploit the SWAR (SIMD within a register) capabilities of commodity processors in high-performance text processing applications such as UTF-8 to UTF-16 transcoding, XML parsing, string search and regular expression matching. Direct architectural support for these algorithms in future SWAR instruction sets could further increase performance as well as simplifying the programming task. A set of simple SWAR instruction set extensions are proposed for this purpose based on the principle of systematic support for inductive doubling as an algorithmic technique. These extensions are shown to significantly reduce instruction count in core parallel bit stream algorithms, often providing a 3X or better improvement. The extensions are also shown to be useful for SWAR programming in other application areas, including providing a systematic treatment for horizontal operations. An implementation model for these extensions involves relatively simple circuitry added to the operand fetch components in a pipelined processor.",
"title": ""
},
{
"docid": "neg:1840438_10",
"text": "Complex organizations exhibit surprising, nonlinear behavior. Although organization scientists have studied complex organizations for many years, a developing set of conceptual and computational tools makes possible new approaches to modeling nonlinear interactions within and between organizations. Complex adaptive system models represent a genuinely new way of simplifying the complex. They are characterized by four key elements: agents with schemata, self-organizing networks sustained by importing energy, coevolution to the edge of chaos, and system evolution based on recombination. New types of models that incorporate these elements will push organization science forward by merging empirical observation with computational agent-based simulation. Applying complex adaptive systems models to strategic management leads to an emphasis on building systems that can rapidly evolve effective adaptive solutions. Strategic direction of complex organizations consists of establishing and modifying environments within which effective, improvised, self-organized solutions can evolve. Managers influence strategic behavior by altering the fitness landscape for local agents and reconfiguring the organizational architecture within which agents adapt. (Complexity Theory; Organizational Evolution; Strategic Management) Since the open-systems view of organizations began to diffuse in the 1960s, comnplexity has been a central construct in the vocabulary of organization scientists. Open systems are open because they exchange resources with the environment, and they are systems because they consist of interconnected components that work together. In his classic discussion of hierarchy in 1962, Simon defined a complex system as one made up of a large number of parts that have many interactions (Simon 1996). Thompson (1967, p. 6) described a complex organization as a set of interdependent parts, which together make up a whole that is interdependent with some larger environment. Organization theory has treated complexity as a structural variable that characterizes both organizations and their environments. With respect to organizations, Daft (1992, p. 15) equates complexity with the number of activities or subsystems within the organization, noting that it can be measured along three dimensions. Vertical complexity is the number of levels in an organizational hierarchy, horizontal complexity is the number of job titles or departments across the organization, and spatial complexity is the number of geographical locations. With respect to environments, complexity is equated with the number of different items or elements that must be dealt with simultaneously by the organization (Scott 1992, p. 230). Organization design tries to match the complexity of an organization's structure with the complexity of its environment and technology (Galbraith 1982). The very first article ever published in Organization Science suggested that it is inappropriate for organization studies to settle prematurely into a normal science mindset, because organizations are enormously complex (Daft and Lewin 1990). What Daft and Lewin meant is that the behavior of complex systems is surprising and is hard to 1047-7039/99/1003/0216/$05.OO ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 Copyright ? 1999, Institute for Operations Research pp. 216-232 and the Management Sciences PHILIP ANDERSON Complexity Theory and Organization Science predict, because it is nonlinear (Casti 1994). 
In nonlinear systems, intervening to change one or two parameters a small amount can drastically change the behavior of the whole system, and the whole can be very different from the sum of the parts. Complex systems change inputs to outputs in a nonlinear way because their components interact with one another via a web of feedback loops. Gell-Mann (1994a) defines complexity as the length of the schema needed to describe and predict the properties of an incoming data stream by identifying its regularities. Nonlinear systems can difficult to compress into a parsimonious description: this is what makes them complex (Casti 1994). According to Simon (1996, p. 1), the central task of a natural science is to show that complexity, correctly viewed, is only a mask for simplicity. Both social scientists and people in organizations reduce a complex description of a system to a simpler one by abstracting out what is unnecessary or minor. To build a model is to encode a natural system into a formal system, compressing a longer description into a shorter one that is easier to grasp. Modeling the nonlinear outcomes of many interacting components has been so difficult that both social and natural scientists have tended to select more analytically tractable problems (Casti 1994). Simple boxes-andarrows causal models are inadequate for modeling systems with complex interconnections and feedback loops, even when nonlinear relations between dependent and independent variables are introduced by means of exponents, logarithms, or interaction terms. How else might we compress complex behavior so we can comprehend it? For Perrow (1967), the more complex an organization is, the less knowable it is and the more deeply ambiguous is its operation. Modem complexity theory suggests that some systems with many interactions among highly differentiated parts can produce surprisingly simple, predictable behavior, while others generate behavior that is impossible to forecast, though they feature simple laws and few actors. As Cohen and Stewart (1994) point out, normal science shows how complex effects can be understood from simple laws; chaos theory demonstrates that simple laws can have complicated, unpredictable consequences; and complexity theory describes how complex causes can produce simple effects. Since the mid-1980s, new approaches to modeling complex systems have been emerging from an interdisciplinary invisible college, anchored on the Santa Fe Institute (see Waldrop 1992 for a historical perspective). The agenda of these scholars includes identifying deep principles underlying a wide variety of complex systems, be they physical, biological, or social (Fontana and Ballati 1999). Despite somewhat frequent declarations that a new paradigm has emerged, it is still premature to declare that a science of complexity, or even a unified theory of complex systems, exists (Horgan 1995). Holland and Miller (1991) have likened the present situation to that of evolutionary theory before Fisher developed a mathematical theory of genetic selection. This essay is not a review of the emerging body of research in complex systems, because that has been ably reviewed many times, in ways accessible to both scholars and managers. Table 1 describes a number of recent, prominent books and articles that inform this literature; Heylighen (1997) provides an excellent introductory bibliography, with a more comprehensive version available on the Internet at http://pespmcl.vub.ac.be/ Evocobib. html. 
Organization science has passed the point where we can regard as novel a summary of these ideas or an assertion that an empirical phenomenon is consistent with them (see Browning et al. 1995 for a pathbreaking example). Six important insights, explained at length in the works cited in Table 1, should be regarded as well-established scientifically. First, many dynamical systems (whose state at time t determines their state at time t + 1) do not reach either a fixed-point or a cyclical equilibrium (see Dooley and Van de Ven's paper in this issue). Second, processes that appear to be random may be chaotic, revolving around identifiable types of attractors in a deterministic way that seldom if ever return to the same state. An attractor is a limited area in a system's state space that it never departs. Chaotic systems revolve around \"strange attractors,\" fractal objects that constrain the system to a small area of its state space, which it explores in a neverending series that does not repeat in a finite amount of time. Tests exist that can establish whether a given process is random or chaotic (Koput 1997, Ott 1993). Similarly, time series that appear to be random walks may actually be fractals with self-reinforcing trends (Bar-Yam 1997). Third, the behavior of complex processes can be quite sensitive to small differences in initial conditions, so that two entities with very similar initial states can follow radically divergent paths over time. Consequently, historical accidents may \"tip\" outcomes strongly in a particular direction (Arthur 1989). Fourth, complex systems resist simple reductionist analyses, because interconnections and feedback loops preclude holding some subsystems constant in order to study others in isolation. Because descriptions at multiple scales are necessary to identify how emergent properties are produced (Bar-Yam 1997), reductionism and holism are complementary strategies in analyzing such systems (Fontana and Ballati ORGANIZATION SCIENCE/Vol. 10, No. 3, May-June 1999 217 PHILIP ANDERSON Complexity Theory and Organization Science Table 1 Selected Resources that Provide an Overview of Complexity Theory Allison and Kelly, 1999 Written for managers, this book provides an overview of major themes in complexity theory and discusses practical applications rooted in-experiences at firms such as Citicorp. Bar-Yam, 1997 A very comprehensive introduction for mathematically sophisticated readers, the book discusses the major computational techniques used to analyze complex systems, including spin-glass models, cellular automata, simulation methodologies, and fractal analysis. Models are developed to describe neural networks, protein folding, developmental biology, and the evolution of human civilization. Brown and Eisenhardt, 1998 Although this book is not an introduction to complexity theory, a series of small tables throughout the text introduces and explains most of the important concepts. The purpose of the book is to view stra",
"title": ""
},
{
"docid": "neg:1840438_11",
"text": "Autonomous navigation has become an increasingly popular machine learning application. Recent advances in deep learning have also brought huge improvements to autonomous navigation. However, prior outdoor autonomous navigation methods depended on various expensive sensors or expensive and sometimes erroneously labeled real data. In this paper, we propose an autonomous navigation method that does not require expensive labeled real images and uses only a relatively inexpensive monocular camera. Our proposed method is based on (1) domain adaptation with an adversarial learning framework and (2) exploiting synthetic data from a simulator. To the best of the authors’ knowledge, this is the first work to apply domain adaptation with adversarial networks to autonomous navigation. We present empirical results on navigation in outdoor courses using an unmanned aerial vehicle. The performance of our method is comparable to that of a supervised model with labeled real data, although our method does not require any label information for the real data. Our proposal includes a theoretical analysis that supports the applicability of our approach.",
"title": ""
},
{
"docid": "neg:1840438_12",
"text": "Theory and research suggest that people can increase their happiness through simple intentional positive activities, such as expressing gratitude or practicing kindness. Investigators have recently begun to study the optimal conditions under which positive activities increase happiness and the mechanisms by which these effects work. According to our positive-activity model, features of positive activities (e.g., their dosage and variety), features of persons (e.g., their motivation and effort), and person-activity fit moderate the effect of positive activities on well-being. Furthermore, the model posits four mediating variables: positive emotions, positive thoughts, positive behaviors, and need satisfaction. Empirical evidence supporting the model and future directions are discussed.",
"title": ""
},
{
"docid": "neg:1840438_13",
"text": "It was predicted that high self-esteem Ss (HSEs) would rationalize an esteem-threatening decision less than low self-esteem Ss (LSEs), because HSEs presumably had more favorable self-concepts with which to affirm, and thus repair, their overall sense of self-integrity. This prediction was supported in 2 experiments within the \"free-choice\" dissonance paradigm--one that manipulated self-esteem through personality feedback and the other that varied it through selection of HSEs and LSEs, but only when Ss were made to focus on their self-concepts. A 3rd experiment countered an alternative explanation of the results in terms of mood effects that may have accompanied the experimental manipulations. The results were discussed in terms of the following: (a) their support for a resources theory of individual differences in resilience to self-image threats--an extension of self-affirmation theory, (b) their implications for self-esteem functioning, and (c) their implications for the continuing debate over self-enhancement versus self-consistency motivation.",
"title": ""
},
{
"docid": "neg:1840438_14",
"text": "The authors investigate the interplay between answer quality and answer speed across question types in community question-answering sites (CQAs). The research questions addressed are the following: (a) How do answer quality and answer speed vary across question types? (b) How do the relationships between answer quality and answer speed vary across question types? (c) How do the best quality answers and the fastest answers differ in terms of answer quality and answer speed across question types? (d) How do trends in answer quality vary over time across question types? From the posting of 3,000 questions in six CQAs, 5,356 answers were harvested and analyzed. There was a significant difference in answer quality and answer speed across question types, and there were generally no significant relationships between answer quality and answer speed. The best quality answers had better overall answer quality than the fastest answers but generally took longer to arrive. In addition, although the trend in answer quality had been mostly random across all question types, the quality of answers appeared to improve gradually when given time. By highlighting the subtle nuances in answer quality and answer speed across question types, this study is an attempt to explore a territory of CQA research that has hitherto been relatively uncharted.",
"title": ""
},
{
"docid": "neg:1840438_15",
"text": "In this article, a case is made for improving the school success of ethnically diverse students through culturally responsive teaching and for preparing teachers in preservice education programs with the knowledge, attitudes, and skills needed to do this. The ideas presented here are brief sketches of more thorough explanations included in my recent book, Culturally Responsive Teaching: Theory, Research, and Practice (2000). The specific components of this approach to teaching are based on research findings, theoretical claims, practical experiences, and personal stories of educators researching and working with underachieving African, Asian, Latino, and Native American students. These data were produced by individuals from a wide variety of disciplinary backgrounds including anthropology, sociology, psychology, sociolinguistics, communications, multicultural education, K-college classroom teaching, and teacher education. Five essential elements of culturally responsive teaching are examined: developing a knowledge base about cultural diversity, including ethnic and cultural diversity content in the curriculum, demonstrating caring and building learning communities, communicating with ethnically diverse students, and responding to ethnic diversity in the delivery of instruction. Culturally responsive teaching is defined as using the cultural characteristics, experiences, and perspectives of ethnically diverse students as conduits for teaching them more effectively. It is based on the assumption that when academic knowledge and skills are situated within the lived experiences and frames of reference of students, they are more personally meaningful, have higher interest appeal, and are learned more easily and thoroughly (Gay, 2000). As a result, the academic achievement of ethnically diverse students will improve when they are taught through their own cultural and experiential filters (Au & Kawakami, 1994; Foster, 1995; Gay, 2000; Hollins, 1996; Kleinfeld, 1975; Ladson-Billings, 1994, 1995).",
"title": ""
},
{
"docid": "neg:1840438_16",
"text": "Investigation of the hippocampus has historically focused on computations within the trisynaptic circuit. However, discovery of important anatomical and functional variability along its long axis has inspired recent proposals of long-axis functional specialization in both the animal and human literatures. Here, we review and evaluate these proposals. We suggest that various long-axis specializations arise out of differences between the anterior (aHPC) and posterior hippocampus (pHPC) in large-scale network connectivity, the organization of entorhinal grid cells, and subfield compositions that bias the aHPC and pHPC towards pattern completion and separation, respectively. The latter two differences give rise to a property, reflected in the expression of multiple other functional specializations, of coarse, global representations in anterior hippocampus and fine-grained, local representations in posterior hippocampus.",
"title": ""
},
{
"docid": "neg:1840438_17",
"text": "The statistical properties of Clarke's fading model with a finite number of sinusoids are analyzed, and an improved reference model is proposed for the simulation of Rayleigh fading channels. A novel statistical simulation model for Rician fading channels is examined. The new Rician fading simulation model employs a zero-mean stochastic sinusoid as the specular (line-of-sight) component, in contrast to existing Rician fading simulators that utilize a non-zero deterministic specular component. The statistical properties of the proposed Rician fading simulation model are analyzed in detail. It is shown that the probability density function of the Rician fading phase is not only independent of time but also uniformly distributed over [-pi, pi). This property is different from that of existing Rician fading simulators. The statistical properties of the new simulators are confirmed by extensive simulation results, showing good agreement with theoretical analysis in all cases. An explicit formula for the level-crossing rate is derived for general Rician fading when the specular component has non-zero Doppler frequency",
"title": ""
},
{
"docid": "neg:1840438_18",
"text": "The term stroke-based rendering collectively describes techniques where images are generated from elements that are usually larger than a pixel. These techniques lend themselves well for rendering artistic styles such as stippling and hatching. This paper presents a novel approach for stroke-based rendering that exploits multi agent systems. RenderBots are individual agents each of which in general represents one stroke. They form a multi agent system and undergo a simulation to distribute themselves in the environment. The environment consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as their way of painting so that different styles can be created in a very flexible way.",
"title": ""
},
{
"docid": "neg:1840438_19",
"text": "A QR code is a special type of barcode that can encode information like numbers, letters, and any other characters. The capacity of a given QR code depends on the version and error correction level, as also the data type which are encoded. A QR code framework for mobile phone applications by exploiting the spectral diversity afforded by the cyan (C), magenta (M), and yellow (Y) print colorant channels commonly used for color printing and the complementary red (R), green (G), and blue (B) channels, which captures the color images had been proposed. Specifically, this spectral diversity to realize a threefold increase in the data rate by encoding independent data the C, Y, and M channels and decoding the data from the complementary R, G, and B channels. In most cases ReedSolomon error correction codes will be used for generating error correction codeword‟s and also to increase the interference cancellation rate. Experimental results will show that the proposed framework successfully overcomes both single and burst errors and also providing a low bit error rate and a high decoding rate for each of the colorant channels when used with a corresponding error correction scheme. Finally proposed system was successfully synthesized using QUARTUS II EDA tools.",
"title": ""
}
] |
1840439 | Performance of a Precoding MIMO System for Decentralized Multiuser Indoor Visible Light Communications | [
{
"docid": "pos:1840439_0",
"text": "The use of space-division multiple access (SDMA) in the downlink of a multiuser multiple-input, multiple-output (MIMO) wireless communications network can provide a substantial gain in system throughput. The challenge in such multiuser systems is designing transmit vectors while considering the co-channel interference of other users. Typical optimization problems of interest include the capacity problem - maximizing the sum information rate subject to a power constraint-or the power control problem-minimizing transmitted power such that a certain quality-of-service metric for each user is met. Neither of these problems possess closed-form solutions for the general multiuser MIMO channel, but the imposition of certain constraints can lead to closed-form solutions. This paper presents two such constrained solutions. The first, referred to as \"block-diagonalization,\" is a generalization of channel inversion when there are multiple antennas at each receiver. It is easily adapted to optimize for either maximum transmission rate or minimum power and approaches the optimal solution at high SNR. The second, known as \"successive optimization,\" is an alternative method for solving the power minimization problem one user at a time, and it yields superior results in some (e.g., low SNR) situations. Both of these algorithms are limited to cases where the transmitter has more antennas than all receive antennas combined. In order to accommodate more general scenarios, we also propose a framework for coordinated transmitter-receiver processing that generalizes the two algorithms to cases involving more receive than transmit antennas. While the proposed algorithms are suboptimal, they lead to simpler transmitter and receiver structures and allow for a reasonable tradeoff between performance and complexity.",
"title": ""
}
] | [
{
"docid": "neg:1840439_0",
"text": "Advanced persistent threats (APTs) pose a significant risk to nearly every infrastructure. Due to the sophistication of these attacks, they are able to bypass existing security systems and largely infiltrate the target network. The prevention and detection of APT campaigns is also challenging, because of the fact that the attackers constantly change and evolve their advanced techniques and methods to stay undetected. In this paper we analyze 22 different APT reports and give an overview of the used techniques and methods. The analysis is focused on the three main phases of APT campaigns that allow to identify the relevant characteristics of such attacks. For each phase we describe the most commonly used techniques and methods. Through this analysis we could reveal different relevant characteristics of APT campaigns, for example that the usage of 0-day exploit is not common for APT attacks. Furthermore, the analysis shows that the dumping of credentials is a relevant step in the lateral movement phase for most APT campaigns. Based on the identified characteristics, we also propose concrete prevention and detection approaches that make it possible to identify crucial malicious activities that are performed during APT campaigns.",
"title": ""
},
{
"docid": "neg:1840439_1",
"text": "This paper presents the design and implementation of a Class EF2 inverter and Class EF2 rectifier for two -W wireless power transfer (WPT) systems, one operating at 6.78 MHz and the other at 27.12 MHz. It will be shown that the Class EF2 circuits can be designed to have beneficial features for WPT applications such as reduced second-harmonic component and lower total harmonic distortion, higher power-output capability, reduction in magnetic core requirements and operation at higher frequencies in rectification compared to other circuit topologies. A model will first be presented to analyze the circuits and to derive values of its components to achieve optimum switching operation. Additional analysis regarding harmonic content, magnetic core requirements and open-circuit protection will also be performed. The design and implementation process of the two Class-EF2-based WPT systems will be discussed and compared to an equivalent Class-E-based WPT system. Experimental results will be provided to confirm validity of the analysis. A dc-dc efficiency of 75% was achieved with Class-EF2-based systems.",
"title": ""
},
{
"docid": "neg:1840439_2",
"text": "Manual classification of brain tumor is time devastating and bestows ambiguous results. Automatic image classification is emergent thriving research area in medical field. In the proposed methodology, features are extracted from raw images which are then fed to ANFIS (Artificial neural fuzzy inference system).ANFIS being neuro-fuzzy system harness power of both hence it proves to be a sophisticated framework for multiobject classification. A comprehensive feature set and fuzzy rules are selected to classify an abnormal image to the corresponding tumor type. This proposed technique is fast in execution, efficient in classification and easy in implementation.",
"title": ""
},
{
"docid": "neg:1840439_3",
"text": "Received Feb 14, 2017 Revised Apr 14, 2017 Accepted Apr 28, 2017 This paper proposes maximum boost control for 7-level z-source cascaded h-bridge inverter and their affiliation between voltage boost gain and modulation index. Z-source network avoids the usage of external dc-dc boost converter and improves output voltage with minimised harmonic content. Z-source network utilises distinctive LC impedance combination with 7-level cascaded inverter and it conquers the conventional voltage source inverter. The maximum boost controller furnishes voltage boost and maintain constant voltage stress across power switches, which provides better output voltage with variation of duty cycles. Single phase 7-level z-source cascaded inverter simulated using matlab/simulink. Keyword:",
"title": ""
},
{
"docid": "neg:1840439_4",
"text": "In this paper, we presented a new model for cyber crime investigation procedure which is as follows: readiness phase, consulting with profiler, cyber crime classification and investigation priority decision, damaged cyber crime scene investigation, analysis by crime profiler, suspects tracking, injurer cyber crime scene investigation, suspect summon, cyber crime logical reconstruction, writing report.",
"title": ""
},
{
"docid": "neg:1840439_5",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
{
"docid": "neg:1840439_6",
"text": "Some soluble phosphate salts, heavily used in agriculture as highly effective phosphorus (P) fertilizers, cause surface water eutrophication, while solid phosphates are less effective in supplying the nutrient P. In contrast, synthetic apatite nanoparticles could hypothetically supply sufficient P nutrients to crops but with less mobility in the environment and with less bioavailable P to algae in comparison to the soluble counterparts. Thus, a greenhouse experiment was conducted to assess the fertilizing effect of synthetic apatite nanoparticles on soybean (Glycine max). The particles, prepared using one-step wet chemical method, were spherical in shape with diameters of 15.8 ± 7.4 nm and the chemical composition was pure hydroxyapatite. The data show that application of the nanoparticles increased the growth rate and seed yield by 32.6% and 20.4%, respectively, compared to those of soybeans treated with a regular P fertilizer (Ca(H2PO4)2). Biomass productions were enhanced by 18.2% (above-ground) and 41.2% (below-ground). Using apatite nanoparticles as a new class of P fertilizer can potentially enhance agronomical yield and reduce risks of water eutrophication.",
"title": ""
},
{
"docid": "neg:1840439_7",
"text": "As modern societies become more dependent on IT services, the potential impact both of adversarial cyberattacks and non-adversarial service management mistakes grows. This calls for better cyber situational awareness-decision-makers need to know what is going on. The main focus of this paper is to examine the information elements that need to be collected and included in a common operational picture in order for stakeholders to acquire cyber situational awareness. This problem is addressed through a survey conducted among the participants of a national information assurance exercise conducted in Sweden. Most participants were government officials and employees of commercial companies that operate critical infrastructure. The results give insight into information elements that are perceived as useful, that can be contributed to and required from other organizations, which roles and stakeholders would benefit from certain information, and how the organizations work with creating cyber common operational pictures today. Among findings, it is noteworthy that adversarial behavior is not perceived as interesting, and that the respondents in general focus solely on their own organization.",
"title": ""
},
{
"docid": "neg:1840439_8",
"text": "About nine billion people in the world are deaf and dumb. The communication between a deaf and hearing person poses to be a serious problem compared to communication between blind and normal visual people. This creates a very little room for them with communication being a fundamental aspect of human life. The blind people can talk freely by means of normal language whereas the deaf-dumb have their own manual-visual language known as sign language. Sign language is a non-verbal form of intercourse which is found amongst deaf communities in world. The languages do not have a common origin and hence difficult to interpret. The project aims to facilitate people by means of a glove based communication interpreter system. The glove is internally equipped with five flex sensors. For each specific gesture, the flex sensor produces a proportional change in resistance. The output from the sensor is analog values it is converted to digital. The processing of these hand gestures is in Arduino Duemilanove Board which is an advance version of the microcontroller. It compares the input signal with predefined voltage levels stored in memory. According to that required output displays on the LCD in the form of text & sound is produced which is stored is memory with the help of speaker. In such a way it is easy for deaf and dumb to communicate with normal people. This system can also be use for the woman security since we are sending a message to authority with the help of smart phone.",
"title": ""
},
{
"docid": "neg:1840439_9",
"text": "This paper presents an innovative broadband millimeter-wave single balanced diode mixer that makes use of a substrate integrated waveguide (SIW)-based 180 hybrid. It has low conversion loss of less than 10 dB, excellent linearity, and high port-to-port isolations over a wide frequency range of 20 to 26 GHz. The proposed mixer has advantages over previously reported millimeter-wave mixer structures judging from a series of aspects such as cost, ease of fabrication, planar construction, and broadband performance. Furthermore, a receiver front-end that integrates a high-performance SIW slot-array antenna and our proposed mixer is introduced. Based on our proposed receiver front-end structure, a K-band wireless communication system with M-ary quadrature amplitude modulation is developed and demonstrated for line-of-sight channels. Excellent overall error vector magnitude performance has been obtained.",
"title": ""
},
{
"docid": "neg:1840439_10",
"text": "OBJECTIVE\nTo determine the frequency of early relapse after achieving good initial correction in children who were on clubfoot abduction brace.\n\n\nMETHODS\nThe cross-sectional study was conducted at the Jinnah Postgraduate Medical Centre, Karachi, and included parents of children of either gender in the age range of 6 months to 3years with idiopathic clubfoot deformities who had undergone Ponseti treatment between September 2012 and June 2013, and who were on maintenance brace when the data was collected from December 2013 to March 2014. Parents of patients with follow-up duration in brace less than six months and those with syndromic clubfoot deformity were excluded. The interviews were taken through a purposive designed questionnaire. SPSS 16 was used for data analysis.\n\n\nRESULTS\nThe study included parents of 120 patients. Of them, 95(79.2%) behaved with good compliance on Denis Browne Splint, 10(8.3%) were fair and 15(12.5%)showed poor compliance. Major reason for poor and non-compliance was unaffordability of time and cost for regular follow-up. Besides, 20(16.67%) had inconsistent use due to delay inre-procurement of Foot Abduction Braceonce the child had outgrown the shoe. Only 4(3.33%) talked of cultural barriers and conflict of interest between the parents. Early relapse was observed in 23(19.16%) patients and 6(5%) of them responded to additional treatment and were put back on brace treatment; 13(10.83%) had minor relapse with forefoot varus, without functional disability, and the remaining 4(3.33%) had major relapse requiring extensive surgery. Overall success was recorded in 116(96.67%) cases.\n\n\nCONCLUSIONS\nThe positioning of shoes on abduction brace bar, comfort in shoes, affordability, initial and subsequent delay in procurement of new shoes once the child's feet overgrew the shoe, were the four containable factors on the part of Ponseti practitioner.",
"title": ""
},
{
"docid": "neg:1840439_11",
"text": "Protection against high voltage-standing-wave-ratios (VSWR) is of great importance in many power amplifier applications. Despite excellent thermal and voltage breakdown properties even gallium nitride devices may need such measures. This work focuses on the timing aspect when using barium-strontium-titanate (BST) varactors to limit power dissipation and gate current. A power amplifier was designed and fabricated, implementing a varactor and a GaN-based voltage switch as varactor modulator for VSWR protection. The response time until the protection is effective was measured by switching the voltages at varactor, gate and drain of the transistor, respectively. It was found that it takes a minimum of 50 μs for the power amplifier to reach a safe condition. Pure gate pinch-off or drain voltage reduction solutions were slower and bias-network dependent. For a thick-film BST MIM varactor, optimized for speed and power, a switching time of 160 ns was achieved.",
"title": ""
},
{
"docid": "neg:1840439_12",
"text": "Highly Autonomous Driving (HAD) systems rely on deep neural networks for the visual perception of the driving environment. Such networks are train on large manually annotated databases. In this work, a semi-parametric approach to one-shot learning is proposed, with the aim of bypassing the manual annotation step required for training perceptions systems used in autonomous driving. The proposed generative framework, coined Generative One-Shot Learning (GOL), takes as input single one-shot objects, or generic patterns, and a small set of so-called regularization samples used to drive the generative process. New synthetic data is generated as Pareto optimal solutions from one-shot objects using a set of generalization functions built into a generalization generator. GOL has been evaluated on environment perception challenges encountered in autonomous vision.",
"title": ""
},
{
"docid": "neg:1840439_13",
"text": "Conventional automatic speech recognition (ASR) based on a hidden Markov model (HMM)/deep neural network (DNN) is a very complicated system consisting of various modules such as acoustic, lexicon, and language models. It also requires linguistic resources, such as a pronunciation dictionary, tokenization, and phonetic context-dependency trees. On the other hand, end-to-end ASR has become a popular alternative to greatly simplify the model-building process of conventional ASR systems by representing complicated modules with a single deep network architecture, and by replacing the use of linguistic resources with a data-driven learning method. There are two major types of end-to-end architectures for ASR; attention-based methods use an attention mechanism to perform alignment between acoustic frames and recognized symbols, and connectionist temporal classification (CTC) uses Markov assumptions to efficiently solve sequential problems by dynamic programming. This paper proposes hybrid CTC/attention end-to-end ASR, which effectively utilizes the advantages of both architectures in training and decoding. During training, we employ the multiobjective learning framework to improve robustness and achieve fast convergence. During decoding, we perform joint decoding by combining both attention-based and CTC scores in a one-pass beam search algorithm to further eliminate irregular alignments. Experiments with English (WSJ and CHiME-4) tasks demonstrate the effectiveness of the proposed multiobjective learning over both the CTC and attention-based encoder–decoder baselines. Moreover, the proposed method is applied to two large-scale ASR benchmarks (spontaneous Japanese and Mandarin Chinese), and exhibits performance that is comparable to conventional DNN/HMM ASR systems based on the advantages of both multiobjective learning and joint decoding without linguistic resources.",
"title": ""
},
{
"docid": "neg:1840439_14",
"text": "A comparison between corporate fed microstrip antenna array (MSAA) and an electromagnetically coupled microstrip antenna array (EMCP-MSAA) at Ka-band is presented. A low loss feed network is proposed based on the analysis of different line widths used in the feed network. Gain improvement of 25% (1.5 dB) is achieved using the proposed feed network in 2×2 EMCP-MSAA. A 8×8 MSAA has been designed and fabricated at Ka-band. The measured bandwidth is 4.3% with gain of 24dB. Bandwidth enhancement is done by designing and fabricating EMCP-MSAA to give bandwidth of 17% for 8×8 array.",
"title": ""
},
{
"docid": "neg:1840439_15",
"text": "A future Internet of Things (IoT) system will connect the physical world into cyberspace everywhere and everything via billions of smart objects. On the one hand, IoT devices are physically connected via communication networks. The service oriented architecture (SOA) can provide interoperability among heterogeneous IoT devices in physical networks. On the other hand, IoT devices are virtually connected via social networks. In this paper we propose adaptive and scalable trust management to support service composition applications in SOA-based IoT systems. We develop a technique based on distributed collaborative filtering to select feedback using similarity rating of friendship, social contact, and community of interest relationships as the filter. Further we develop a novel adaptive filtering technique to determine the best way to combine direct trust and indirect trust dynamically to minimize convergence time and trust estimation bias in the presence of malicious nodes performing opportunistic service and collusion attacks. For scalability, we consider a design by which a capacity-limited node only keeps trust information of a subset of nodes of interest and performs minimum computation to update trust. We demonstrate the effectiveness of our proposed trust management through service composition application scenarios with a comparative performance analysis against EigenTrust and PeerTrust.",
"title": ""
},
{
"docid": "neg:1840439_16",
"text": "Introduction Reflexivity is a curious term with various meanings. Finding a definition of reflexivity that demonstrates what it means and how it is achieved is difficult (Colbourne and Sque 2004). Moreover, writings on reflexivity have not been transparent in terms of the difficulties, practicalities and methods of the process (Mauthner and Doucet 2003). Nevertheless, it is argued that an attempt be made to gain ‘some kind of intellectual handle’ on reflexivity in order to make use of it as a guiding standard (Freshwater and Rolfe 2001). The role of reflexivity in the many and varied qualitative methodologies is significant. It is therefore a concept of particular relevance to nursing as qualitative methodologies play a principal function in nursing enquiry. Reflexivity assumes a pivotal role in feminist research (King 1994). It is also paramount in participatory action research (Robertson 2000), ethnographies, and hermeneutic and post-structural approaches (Koch and Harrington 1998). Furthermore, it plays an integral part in medical case study research reflexivity epistemological critical feminist ▲ ▲ ▲ ▲ k e y w o rd s",
"title": ""
},
{
"docid": "neg:1840439_17",
"text": "Music recommendation systems are well explored and commonly used but are normally based on manually tagged parameters and simple similarity calculation. Our project proposes a recommendation system based on emotional computing, automatic classification and feature extraction, which recommends music based on the emotion expressed by the song.\n To achieve this goal a set of features is extracted from the song, including the MFCC (mel-frequency cepstral coefficients) following the works of McKinney et al. [6] and a machine learning system is trained on a set of 424 songs, which are categorized by emotion. The categorization of the song is performed manually by multiple persons to avoid error. The emotional categorization is performed using a modified version of the Tellegen-Watson-Clark emotion model [7], as proposed by Trohidis et al. [8]. The System is intended as desktop application that can reliably determine similarities between the main emotion in multiple pieces of music, allowing the user to choose music by emotion. We report our findings below.",
"title": ""
},
{
"docid": "neg:1840439_18",
"text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"title": ""
},
{
"docid": "neg:1840439_19",
"text": "Although acoustic waves are the most versatile and widely used physical layer technology for underwater wireless communication networks (UWCNs), they are adversely affected by ambient noise, multipath propagation, and fading. The large propagation delays, low bandwidth, and high bit error rates of the underwater acoustic channel hinder communication as well. These operational limits call for complementary technologies or communication alternatives when the acoustic channel is severely degraded. Magnetic induction (MI) is a promising technique for UWCNs that is not affected by large propagation delays, multipath propagation, and fading. In this paper, the MI communication channel has been modeled. Its propagation characteristics have been compared to the electromagnetic and acoustic communication systems through theoretical analysis and numerical evaluations. The results prove the feasibility of MI communication in underwater environments. The MI waveguide technique is developed to reduce path loss. The communication range between source and destination is considerably extended to hundreds of meters in fresh water due to its superior bit error rate performance.",
"title": ""
}
] |
1840440 | Linked Data Indexing of Distributed Ledgers | [
{
"docid": "pos:1840440_0",
"text": "The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain or storing educational records, drawing also on our previous research into reputation management for educational systems.",
"title": ""
},
{
"docid": "pos:1840440_1",
"text": "The Web 2.0 wave brings, among other aspects, the Programmable Web:increasing numbers of Web sites provide machine-oriented APIs and Web services. However, most APIs are only described with text in HTML documents. The lack of machine-readable API descriptions affects the feasibility of tool support for developers who use these services. We propose a microformat called hRESTS (HTML for RESTful Services) for machine-readable descriptions of Web APIs, backed by a simple service model. The hRESTS microformat describes main aspects of services, such as operations, inputs and outputs. We also present two extensions of hRESTS:SA-REST, which captures the facets of public APIs important for mashup developers, and MicroWSMO, which provides support for semantic automation.",
"title": ""
}
] | [
{
"docid": "neg:1840440_0",
"text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidian space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.",
"title": ""
},
{
"docid": "neg:1840440_1",
"text": "In this paper partial discharges (PD) in disc-shaped cavities in polycarbonate are measured at variable frequency (0.01-100 Hz) of the applied voltage. The advantage of PD measurements at variable frequency is that more information about the insulation system may be extracted than from traditional PD measurements at a single frequency (usually 50/60 Hz). The PD activity in the cavity is seen to depend on the applied frequency. Moreover, the PD frequency dependence changes with the applied voltage amplitude, the cavity diameter, and the cavity location (insulated or electrode bounded). It is suggested that the PD frequency dependence is governed by the statistical time lag of PD and the surface charge decay in the cavity. This is the first of two papers addressing the frequency dependence of PD in a cavity. In the second paper a physical model of PD in a cavity at variable applied frequency is presented.",
"title": ""
},
{
"docid": "neg:1840440_2",
"text": "We propose a new approach to localizing handle-like grasp affordances in 3-D point clouds. The main idea is to identify a set of sufficient geometric conditions for the existence of a grasp affordance and to search the point cloud for neighborhoods that satisfy these conditions. Our goal is not to find all possible grasp affordances, but instead to develop a method of localizing important types of grasp affordances quickly and reliably. The strength of this method relative to other current approaches is that it is very practical: it can have good precision/recall for the types of affordances under consideration, it runs in real-time, and it is easy to adapt to different robots and operating scenarios. We validate with a set of experiments where the approach is used to enable the Rethink Baxter robot to localize and grasp unmodelled objects.",
"title": ""
},
{
"docid": "neg:1840440_3",
"text": "We study interactive situations in which players are boundedly rational. Each player, rather than optimizing given a belief about the other players' behavior, as in the theory of Nash equilibrium, uses the following choice procedure. She rst associates one consequence with each of her actions by sampling (literally or virtually) each of her actions once. Then she chooses the action that has the best consequence. We deene a notion of equilibrium for such situations and study its properties. (JEL C72) Economists' interest in game theory was prompted by dissatisfaction with the assumption underlying the notion of competitive equilibrium that each economic agent ignores other agents' actions when making choices. Game theory analyzes the interaction of agents who \\think strategically\", making their decisions rationally after forming beliefs about their opponents' moves, beliefs that are based on an analysis of the opponents' interests.",
"title": ""
},
{
"docid": "neg:1840440_4",
"text": "Compressed sensing is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for reconstruction. In this paper we introduce a new theory for distributed compressed sensing (DCS) that enables new distributed coding algorithms for multi-signal ensembles that exploit both intraand inter-signal correlation structures. The DCS theory rests on a new concept that we term the joint sparsity of a signal ensemble. We study in detail three simple models for jointly sparse signals, propose algorithms for joint recovery of multiple signals from incoherent projections, and characterize theoretically and empirically the number of measurements per sensor required for accurate reconstruction. We establish a parallel with the Slepian-Wolf theorem from information theory and establish upper and lower bounds on the measurement rates required for encoding jointly sparse signals. In two of our three models, the results are asymptotically best-possible, meaning that both the upper and lower bounds match the performance of our practical algorithms. Moreover, simulations indicate that the asymptotics take effect with just a moderate number of signals. In some sense DCS is a framework for distributed compression of sources with memory, which has remained a challenging problem for some time. DCS is immediately applicable to a range of problems in sensor networks and arrays.",
"title": ""
},
{
"docid": "neg:1840440_5",
"text": "MOTIVATION\nAn EXCEL template has been developed for the calculation of enzyme kinetic parameters by non-linear regression techniques. The tool is accurate, inexpensive, as well as easy to use and modify.\n\n\nAVAILABILITY\nThe program is available from http://www.ebi.ac.uk/biocat/biocat.html\n\n\nCONTACT\nagustin. hernandez@bio.kuleuven.ac.be",
"title": ""
},
{
"docid": "neg:1840440_6",
"text": "The move to Internet news publishing is the latest in a series of technological shifts which have required journalists not merely to adapt their daily practice but which have also at least in the view of some – recast their role in society. For over a decade, proponents of the networked society as a new way of life have argued that responsibility for news selection and production will shift from publishers, editors and reporters to individual consumers, as in the scenario offered by Nicholas Negroponte:",
"title": ""
},
{
"docid": "neg:1840440_7",
"text": "Fast and accurate localization of software defects continues to be a difficult problem since defects can emanate from a large variety of sources and can often be intricate in nature. In this paper, we show how version histories of a software project can be used to estimate a prior probability distribution for defect proneness associated with the files in a given version of the project. Subsequently, these priors are used in an IR (Information Retrieval) framework to determine the posterior probability of a file being the cause of a bug. We first present two models to estimate the priors, one from the defect histories and the other from the modification histories, with both types of histories as stored in the versioning tools. Referring to these as the base models, we then extend them by incorporating a temporal decay into the estimation of the priors. We show that by just including the base models, the mean average precision (MAP) for bug localization improves by as much as 30%. And when we also factor in the time decay in the estimates of the priors, the improvements in MAP can be as large as 80%.",
"title": ""
},
{
"docid": "neg:1840440_8",
"text": "This paper proposes a new active interphase transformer for 24-pulse diode rectifier. The proposed scheme injects a compensation current into the secondary winding of either of the two first-stage interphase transformers. For only one of the first-stage interphase transformers being active, the inverter conducted the injecting current is with a lower kVA rating [1.26% pu (Po)] compared to conventional active interphase transformers. Moreover, the proposal scheme draws near sinusoidal input currents and the simulated and the experimental total harmonic distortion of overall line currents are only 1.88% and 2.27% respectively. When the inverter malfunctions, the input line current still can keep in the conventional 24-pulse situation. A digital-signal-processor (DSP) based digital controller is employed to calculate the desired compensation current and deals with the trigger signals needed for the inverter. Moreover, a 6kW prototype is built for test. Both simulation and experimental results demonstrate the validity of the proposed scheme.",
"title": ""
},
{
"docid": "neg:1840440_9",
"text": "This paper discussed a fast dynamic braking method of three phase induction motor. This braking method consists of two conventional braking methods i.e. direct current injection braking and capacitor self excitation braking. Those mathods were arranged in a such grading time to become a multistage dynamic braking. Simulation was done using MATLAB/Simulink software for design and predicting the behaviour. The results showed that the propossed method gave faster braking than the other two methods carried out separately.",
"title": ""
},
{
"docid": "neg:1840440_10",
"text": "Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters.\n In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.",
"title": ""
},
{
"docid": "neg:1840440_11",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "neg:1840440_12",
"text": "In this paper, the impact of an increased number of layers on the performance of axial flux permanent magnet synchronous machines (AFPMSMs) is studied. The studied parameters are the inductance, terminal voltages, PM losses, iron losses, the mean value of torque, and the ripple torque. It is shown that increasing the number of layers reduces the fundamental winding factor. In consequence, the rated torque for the same current reduces. However, the reduction of harmonics associated with a higher number of layers reduces the ripple torque, PM losses, and iron losses. Besides studying the performance of the AFPMSMs for the rated conditions, the study is broadened for the field weakening (FW) region. During the FW region, the flux of the PMs is weakened by an injection of a reversible d-axis current. This keeps the terminal voltage of the machine fixed at the rated value. The inductance plays an important role in the FW study. A complete study for the FW shows that the two layer winding has the optimum performance compared to machines with an other number of winding layers.",
"title": ""
},
{
"docid": "neg:1840440_13",
"text": "In present-day high-performance electronic components, the generated heat loads result in unacceptably high junction temperatures and reduced component lifetimes. Thermoelectric modules can, in principle, enhance heat removal and reduce the temperatures of such electronic devices. However, state-of-the-art bulk thermoelectric modules have a maximum cooling flux qmax of only about 10 W cm(-2), while state-of-the art commercial thin-film modules have a qmax <100 W cm(-2). Such flux values are insufficient for thermal management of modern high-power devices. Here we show that cooling fluxes of 258 W cm(-2) can be achieved in thin-film Bi2Te3-based superlattice thermoelectric modules. These devices utilize a p-type Sb2Te3/Bi2Te3 superlattice and n-type δ-doped Bi2Te3-xSex, both of which are grown heteroepitaxially using metalorganic chemical vapour deposition. We anticipate that the demonstration of these high-cooling-flux modules will have far-reaching impacts in diverse applications, such as advanced computer processors, radio-frequency power devices, quantum cascade lasers and DNA micro-arrays.",
"title": ""
},
{
"docid": "neg:1840440_14",
"text": "In this paper, we present a feature based approach for monocular scene reconstruction based on extended Kalman filters (EKF). Our method processes a sequence of images taken by a single camera mounted frontal on a mobile robot. Using different techniques, we are able to produce a precise reconstruction that is free from outliers and therefore can be used for reliable obstacle detection. In real-world field-tests we show that the presented approach is able to detect obstacles that are not seen by other sensors, such as laser-range-finder s. Furthermore, we show that visual obstacle detection combined with a laser-range-finder can increase the detection rate of obstacles considerably allowing the autonomous use of mobile robots in complex public environments.",
"title": ""
},
{
"docid": "neg:1840440_15",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
},
{
"docid": "neg:1840440_16",
"text": "Our study examined the determinants of ERP knowledge transfer from implementation consultants (ICs) to key users (KUs), and vice versa. An integrated model was developed, positing that knowledge transfer was influenced by the knowledge-, source-, recipient-, and transfer context-related aspects. Data to test this model were collected from 85 ERP-implementation projects of firms that were mainly located in Zhejiang province, China. The results of the analysis demonstrated that all four aspects had a significant influence on ERP knowledge transfer. Furthermore, the results revealed the mediator role of the transfer activities and arduous relationship between ICs and KUs. The influence on knowledge transfer from the source’s willingness to transfer and the recipient’s willingness to accept knowledge was fully mediated by transfer activities, whereas the influence on knowledge transfer from the recipient’s ability to absorb knowledge was only partially mediated by transfer activities. The influence on knowledge transfer from the communication capability (including encoding and decoding competence) was fully mediated by arduous relationship. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840440_17",
"text": "Algorithmic image-based diagnosis and prognosis of neurodegenerative diseases on longitudinal data has drawn great interest from computer vision researchers. The current state-of-the-art models for many image classification tasks are based on the Convolutional Neural Networks (CNN). However, a key challenge in applying CNN to biological problems is that the available labeled training samples are very limited. Another issue for CNN to be applied in computer aided diagnosis applications is that to achieve better diagnosis and prognosis accuracy, one usually has to deal with the longitudinal dataset, i.e., the dataset of images scanned at different time points. Here we argue that an enhanced CNN model with transfer learning for the joint analysis of tasks from multiple time points or regions of interests may have a potential to improve the accuracy of computer aided diagnosis. To reach this goal, we innovate a CNN based deep learning multi-task dictionary learning framework to address the above challenges. Firstly, we pretrain CNN on the ImageNet dataset and transfer the knowledge from the pre-trained model to the medical imaging progression representation, generating the features for different tasks. Then, we propose a novel unsupervised learning method, termed Multi-task Stochastic Coordinate Coding (MSCC), for learning different tasks by using shared and individual dictionaries and generating the sparse features required to predict the future cognitive clinical scores. We apply our new model in a publicly available neuroimaging cohort to predict clinical measures with two different feature sets and compare them with seven other state-of-theart methods. The experimental results show our proposed method achieved superior results.",
"title": ""
},
{
"docid": "neg:1840440_18",
"text": "This paper proposes a new finger-vein recognition system that uses a binary robust invariant elementary feature from accelerated segment test feature points and an adaptive thresholding strategy. Subsequently, the proposed a multi-image quality assessments (MQA) are applied to conduct a second stage verification. As oppose to other studies, the region of interest is directly identified using a range of normalized feature point area, which reduces the complexity of pre-processing. This recognition structure allows an efficient feature points matching using a robust feature and rigorous verification using the MQA process. As a result, this method not only reduces the system computation time, comparisons against former relevant studies demonstrate the superiority of the proposed method.",
"title": ""
},
{
"docid": "neg:1840440_19",
"text": "Stock market decision making is a very challenging and difficult task of financial data prediction. Prediction about stock market with high accuracy movement yield profit for investors of the stocks. Because of the complexity of stock market financial data, development of efficient models for prediction decision is very difficult, and it must be accurate. This study attempted to develop models for prediction of the stock market and to decide whether to buy/hold the stock using data mining and machine learning techniques. The classification techniques used in these models are naive bayes and random forest classification. Technical indicators are calculated from the stock prices based on time-line data and it is used as inputs of the proposed prediction models. 10 years of stock market data has been used for prediction. Based on the data set, these models are capable to generate buy/hold signal for stock market as a output. The main goal of this paper is to generate decision as per user’s requirement like amount to be invested, time duration for investment, minimum profit, maximum loss using machine learning and data analysis techniques.",
"title": ""
}
] |
1840441 | Actuator design for high force proprioceptive control in fast legged locomotion | [
{
"docid": "pos:1840441_0",
"text": "This paper shows mechanisms for artificial finger based on a planetary gear system (PGS). Using the PGS as a transmitter provides an under-actuated system for driving three joints of a finger with back-drivability that is crucial characteristics for fingers as an end-effector when it interacts with external environment. This paper also shows the artificial finger employed with the originally developed mechanism called “double planetary gear system” (DPGS). The DPGS provides not only back-drivable and under-actuated flexion-extension of the three joints of a finger, which is identical to the former, but also adduction-abduction of the MP joint. Both of the above finger mechanisms are inherently safe due to being back-drivable with no electric device or sensor in the finger part. They are also rigorously solvable in kinematics and kinetics as shown in this paper.",
"title": ""
},
{
"docid": "pos:1840441_1",
"text": "Abstrud In the last two years a third generation of torque-controlled light weight robots has been developed in DLR‘s robotics and mechatronics lab which is based on all the experiences that have been made with the first two generations. It aims at reaching the limits of what seems achievable with present day technologies not only with respect to light-weight, but also with respect to minimal power consumption and losses. One of the main gaps we tried to close in version III was the development of a new, robot-dedicated high energy motor designed with the best available techniques of concurrent engineering, and the renewed efforts to save weight in the links by using ultralight carbon fibres.",
"title": ""
}
] | [
{
"docid": "neg:1840441_0",
"text": "Predicting the final state of a running process, the remaining time to completion or the next activity of a running process are important aspects of runtime process management. Runtime management requires the ability to identify processes that are at risk of not meeting certain criteria in order to offer case managers decision information for timely intervention. This in turn requires accurate prediction models for process outcomes and for the next process event, based on runtime information available at the prediction and decision point. In this paper, we describe an initial application of deep learning with recurrent neural networks to the problem of predicting the next process event. This is both a novel method in process prediction, which has previously relied on explicit process models in the form of Hidden Markov Models (HMM) or annotated transition systems, and also a novel application for deep learning methods.",
"title": ""
},
{
"docid": "neg:1840441_1",
"text": "Although three-phase permanent magnet (PM) motors are quite common in industry, multi-phase PM motors are used in special applications where high power and redundancy are required. Multi-phase PM motors offer higher torque/power density than conventional three-phase PM motors. In this paper, a novel multi-phase consequent pole PM (CPPM) synchronous motor is proposed. The constant power–speed range of the proposed motor is quite wide as opposed to conventional PM motors. The design and the detailed finite-element analysis of the proposed nine-phase CPPM motor and performance comparison with a nine-phase surface mounted PM motor are completed to illustrate the benefits of the proposed motor.",
"title": ""
},
{
"docid": "neg:1840441_2",
"text": "This paper examines the connection between the legal environment and financial development, and then traces this link through to long-run economic growth. Countries with legal and regulatory systems that (1) give a high priority to creditors receiving the full present value of their claims on corporations, (2) enforce contracts effectively, and (3) promote comprehensive and accurate financial reporting by corporations have better-developed financial intermediaries. The data also indicate that the exogenous component of financial intermediary development – the component of financial intermediary development defined by the legal and regulatory environment – is positively associated with economic growth. * Department of Economics, 114 Rouss Hall, University of Virginia, Charlottesville, VA 22903-3288; RL9J@virginia.edu. I thank Thorsten Beck, Maria Carkovic, Bill Easterly, Lant Pritchett, Andrei Shleifer, and seminar participants at the Board of Governors of the Federal Reserve System, the University of Virginia, and the World Bank for helpful comments.",
"title": ""
},
{
"docid": "neg:1840441_3",
"text": "Reducing redundancy in data representation leads to decreased data storage requirements and lower costs for data communication.",
"title": ""
},
{
"docid": "neg:1840441_4",
"text": "To manage supply chain efficiently, e-business organizations need to understand their sales effectively. Previous research has shown that product review plays an important role in influencing sales performance, especially review volume and rating. However, limited attention has been paid to understand how other factors moderate the effect of product review on online sales. This study aims to confirm the importance of review volume and rating on improving sales performance, and further examine the moderating roles of product category, answered questions, discount and review usefulness in such relationships. By analyzing 2939 records of data extracted from Amazon.com using a big data architecture, it is found that review volume and rating have stronger influence on sales rank for search product than for experience product. Also, review usefulness significantly moderates the effects of review volume and rating on product sales rank. In addition, the relationship between review volume and sales rank is significantly moderated by both answered questions and discount. However, answered questions and discount do not have significant moderation effect on the relationship between review rating and sales rank. The findings expand previous literature by confirming important interactions between customer review features and other factors, and the findings provide practical guidelines to manage e-businesses. This study also explains a big data architecture and illustrates the use of big data technologies in testing theoretical",
"title": ""
},
{
"docid": "neg:1840441_5",
"text": "Gesture recognition has emerged recently as a promising application in our daily lives. Owing to low cost, prevalent availability, and structural simplicity, RFID shall become a popular technology for gesture recognition. However, the performance of existing RFID-based gesture recognition systems is constrained by unfavorable intrusiveness to users, requiring users to attach tags on their bodies. To overcome this, we propose GRfid, a novel device-free gesture recognition system based on phase information output by COTS RFID devices. Our work stems from the key insight that the RFID phase information is capable of capturing the spatial features of various gestures with low-cost commodity hardware. In GRfid, after data are collected by hardware, we process the data by a sequence of functional blocks, namely data preprocessing, gesture detection, profiles training, and gesture recognition, all of which are well-designed to achieve high performance in gesture recognition. We have implemented GRfid with a commercial RFID reader and multiple tags, and conducted extensive experiments in different scenarios to evaluate its performance. The results demonstrate that GRfid can achieve an average recognition accuracy of <inline-formula> <tex-math notation=\"LaTeX\">$96.5$</tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq1-2549518.gif\"/> </alternatives></inline-formula> and <inline-formula><tex-math notation=\"LaTeX\">$92.8$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2549518.gif\"/></alternatives></inline-formula> percent in the identical-position and diverse-positions scenario, respectively. Moreover, experiment results show that GRfid is robust against environmental interference and tag orientations.",
"title": ""
},
{
"docid": "neg:1840441_6",
"text": "This paper describes a student project examining mechanisms with which to attack Bluetooth-enabled devices. The paper briefly describes the protocol architecture of Bluetooth and the Java interface that programmers can use to connect to Bluetooth communication services. Several types of attacks are described, along with a detailed example of two attack tools, Bloover II and BT Info.",
"title": ""
},
{
"docid": "neg:1840441_7",
"text": "Skip-Gram Negative Sampling (SGNS) word embedding model, well known by its implementation in “word2vec” software, is usually optimized by stochastic gradient descent. It can be shown that optimizing for SGNS objective can be viewed as an optimization problem of searching for a good matrix with the low-rank constraint. The most standard way to solve this type of problems is to apply Riemannian optimization framework to optimize the SGNS objective over the manifold of required low-rank matrices. In this paper, we propose an algorithm that optimizes SGNS objective using Riemannian optimization and demonstrates its superiority over popular competitors, such as the original method to train SGNS and SVD over SPPMI matrix.",
"title": ""
},
{
"docid": "neg:1840441_8",
"text": "A 500W classical three-way Doherty power amplifier (DPA) with LDMOS devices at 1.8GHz is presented. Optimized device ratio is selected to achieve maximum efficiency as well as linearity. With a simple passive input driving network implementation, the demonstrator exhibits more than 55% efficiency with 9.9PAR WCDMA signal from 1805MHz-1880MHz. It can be linearized at -60dBc level with 20MHz LTE signal at an average output power of 49dBm.",
"title": ""
},
{
"docid": "neg:1840441_9",
"text": "The research challenge addressed in this paper is to devise effective techniques for identifying task-based sessions, i.e. sets of possibly non contiguous queries issued by the user of a Web Search Engine for carrying out a given task. In order to evaluate and compare different approaches, we built, by means of a manual labeling process, a ground-truth where the queries of a given query log have been grouped in tasks. Our analysis of this ground-truth shows that users tend to perform more than one task at the same time, since about 75% of the submitted queries involve a multi-tasking activity. We formally define the Task-based Session Discovery Problem (TSDP) as the problem of best approximating the manually annotated tasks, and we propose several variants of well known clustering algorithms, as well as a novel efficient heuristic algorithm, specifically tuned for solving the TSDP. These algorithms also exploit the collaborative knowledge collected by Wiktionary and Wikipedia for detecting query pairs that are not similar from a lexical content point of view, but actually semantically related. The proposed algorithms have been evaluated on the above ground-truth, and are shown to perform better than state-of-the-art approaches, because they effectively take into account the multi-tasking behavior of users.",
"title": ""
},
{
"docid": "neg:1840441_10",
"text": "DoS attacks on sensor measurements used for industrial control can cause the controller of the process to use stale data. If the DoS attack is not timed properly, the use of stale data by the controller will have limited impact on the process; however, if the attacker is able to launch the DoS attack at the correct time, the use of stale data can cause the controller to drive the system to an unsafe state.\n Understanding the timing parameters of the physical processes does not only allow an attacker to construct a successful attack but also to maximize its impact (damage to the system). In this paper we use Tennessee Eastman challenge process to study an attacker that has to identify (in realtime) the optimal timing to launch a DoS attack. The choice of time to begin an attack is forward-looking, requiring the attacker to consider each opportunity against the possibility of a better opportunity in the future, and this lends itself to the theory of optimal stopping problems. In particular we study the applicability of the Best Choice Problem (also known as the Secretary Problem), quickest change detection, and statistical process outliers. Our analysis can be used to identify specific sensor measurements that need to be protected, and the time that security or safety teams required to respond to attacks, before they cause major damage.",
"title": ""
},
{
"docid": "neg:1840441_11",
"text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.",
"title": ""
},
{
"docid": "neg:1840441_12",
"text": "Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that by identifying and highlighting high quality contributions this can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.",
"title": ""
},
{
"docid": "neg:1840441_13",
"text": "Anomaly detection involves identifying the events which do not conform to an expected pattern in data. A common approach to anomaly detection is to identify outliers in a latent space learned from data. For instance, PCA has been successfully used for anomaly detection. Variational autoencoder (VAE) is a recently-developed deep generative model which has established itself as a powerful method for learning representation from data in a nonlinear way. However, the VAE does not take the temporal dependence in data into account, so it limits its applicability to time series. In this paper we combine the echo-state network, which is a simple training method for recurrent networks, with the VAE, in order to learn representation from multivariate time series data. We present an echo-state conditional variational autoencoder (ES-CVAE) and demonstrate its useful behavior in the task of anomaly detection in multivariate time series data.",
"title": ""
},
{
"docid": "neg:1840441_14",
"text": "A country's growth is strongly measured by quality of its education system. Education sector, across the globe has witnessed sea change in its functioning. Today it is recognized as an industry and like any other industry it is facing challenges, the major challenges of higher education being decrease in students' success rate and their leaving a course without completion. An early prediction of students' failure may help the management provide timely counseling as well coaching to increase success rate and student retention. We use different classification techniques to build performance prediction model based on students' social integration, academic integration, and various emotional skills which have not been considered so far. Two algorithms J48 (Implementation of C4.5) and Random Tree have been applied to the records of MCA students of colleges affiliated to Guru Gobind Singh Indraprastha University to predict third semester performance. Random Tree is found to be more accurate in predicting performance than J48 algorithm.",
"title": ""
},
{
"docid": "neg:1840441_15",
"text": "Big sensing data is prevalent in both industry and scientific research applications where the data is generated with high volume and velocity. Cloud computing provides a promising platform for big sensing data processing and storage as it provides a flexible stack of massive computing, storage, and software services in a scalable manner. Current big sensing data processing on Cloud have adopted some data compression techniques. However, due to the high volume and velocity of big sensing data, traditional data compression techniques lack sufficient efficiency and scalability for data processing. Based on specific on-Cloud data compression requirements, we propose a novel scalable data compression approach based on calculating similarity among the partitioned data chunks. Instead of compressing basic data units, the compression will be conducted over partitioned data chunks. To restore original data sets, some restoration functions and predictions will be designed. MapReduce is used for algorithm implementation to achieve extra scalability on Cloud. With real world meteorological big sensing data experiments on U-Cloud platform, we demonstrate that the proposed scalable compression approach based on data chunk similarity can significantly improve data compression efficiency with affordable data accuracy loss.",
"title": ""
},
{
"docid": "neg:1840441_16",
"text": "Understanding how ecological conditions influence physiological responses is fundamental to forensic entomology. When determining the minimum postmortem interval with blow fly evidence in forensic investigations, using a reliable and accurate model of development is integral. Many published studies vary in results, source populations, and experimental designs. Accordingly, disentangling genetic causes of developmental variation from environmental causes is difficult. This study determined the minimum time of development and pupal sizes of three populations of Lucilia sericata Meigen (Diptera: Calliphoridae; from California, Michigan, and West Virginia) at two temperatures (20 degrees C and 33.5 degrees C). Development times differed significantly between strain and temperature. In addition, California pupae were the largest and fastest developing at 20 degrees C, but at 33.5 degrees C, though they still maintained their rank in size among the three populations, they were the slowest to develop. These results indicate a need to account for genetic differences in development, and genetic variation in environmental responses, when estimating a postmortem interval with entomological data.",
"title": ""
},
{
"docid": "neg:1840441_17",
"text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.",
"title": ""
},
{
"docid": "neg:1840441_18",
"text": "The discovery of stem cells that can generate neural tissue has raised new possibilities for repairing the nervous system. A rush of papers proclaiming adult stem cell plasticity has fostered the notion that there is essentially one stem cell type that, with the right impetus, can create whatever progeny our heart, liver or other vital organ desires. But studies aimed at understanding the role of stem cells during development have led to a different view — that stem cells are restricted regionally and temporally, and thus not all stem cells are equivalent. Can these views be reconciled?",
"title": ""
},
{
"docid": "neg:1840441_19",
"text": "Queries in patent prior art search are full patent applications and much longer than standard ad hoc search and web search topics. Standard information retrieval (IR) techniques are not entirely effective for patent prior art search because of ambiguous terms in these massive queries. Reducing patent queries by extracting key terms has been shown to be ineffective mainly because it is not clear what the focus of the query is. An optimal query reduction algorithm must thus seek to retain the useful terms for retrieval favouring recall of relevant patents, but remove terms which impair IR effectiveness. We propose a new query reduction technique decomposing a patent application into constituent text segments and computing the Language Modeling (LM) similarities by calculating the probability of generating each segment from the top ranked documents. We reduce a patent query by removing the least similar segments from the query, hypothesising that removal of these segments can increase the precision of retrieval, while still retaining the useful context to achieve high recall. Experiments on the patent prior art search collection CLEF-IP 2010 show that the proposed method outperforms standard pseudo-relevance feedback (PRF) and a naive method of query reduction based on removal of unit frequency terms (UFTs).",
"title": ""
}
] |
1840442 | Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification | [
{
"docid": "pos:1840442_0",
"text": "The field of surveillance and forensics research is currently shifting focus and is now showing an ever increasing interest in the task of people reidentification. This is the task of assigning the same identifier to all instances of a particular individual captured in a series of images or videos, even after the occurrence of significant gaps over time or space. People reidentification can be a useful tool for people analysis in security as a data association method for long-term tracking in surveillance. However, current identification techniques being utilized present many difficulties and shortcomings. For instance, they rely solely on the exploitation of visual cues such as color, texture, and the object’s shape. Despite the many advances in this field, reidentification is still an open problem. This survey aims to tackle all the issues and challenging aspects of people reidentification while simultaneously describing the previously proposed solutions for the encountered problems. This begins with the first attempts of holistic descriptors and progresses to the more recently adopted 2D and 3D model-based approaches. The survey also includes an exhaustive treatise of all the aspects of people reidentification, including available datasets, evaluation metrics, and benchmarking.",
"title": ""
},
{
"docid": "pos:1840442_1",
"text": "We present two novel methods for face verification. Our first method - “attribute” classifiers - uses binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance (e.g., gender, race, and age). Our second method - “simile” classifiers - removes the manual labeling required for attribute classification and instead learns the similarity of faces, or regions of faces, to specific reference people. Neither method requires costly, often brittle, alignment between image pairs; yet, both methods produce compact visual descriptions, and work on real-world images. Furthermore, both the attribute and simile classifiers improve on the current state-of-the-art for the LFW data set, reducing the error rates compared to the current best by 23.92% and 26.34%, respectively, and 31.68% when combined. For further testing across pose, illumination, and expression, we introduce a new data set - termed PubFig - of real-world images of public figures (celebrities and politicians) acquired from the internet. This data set is both larger (60,000 images) and deeper (300 images per individual) than existing data sets of its kind. Finally, we present an evaluation of human performance.",
"title": ""
}
] | [
{
"docid": "neg:1840442_0",
"text": "More and more medicinal mushrooms have been widely used as a miraculous herb for health promotion, especially by cancer patients. Here we report screening thirteen mushrooms for anti-cancer cell activities in eleven different cell lines. Of the herbal products tested, we found that the extract of Amauroderma rude exerted the highest activity in killing most of these cancer cell lines. Amauroderma rude is a fungus belonging to the Ganodermataceae family. The Amauroderma genus contains approximately 30 species widespread throughout the tropical areas. Since the biological function of Amauroderma rude is unknown, we examined its anti-cancer effect on breast carcinoma cell lines. We compared the anti-cancer activity of Amauroderma rude and Ganoderma lucidum, the most well-known medicinal mushrooms with anti-cancer activity and found that Amauroderma rude had significantly higher activity in killing cancer cells than Ganoderma lucidum. We then examined the effect of Amauroderma rude on breast cancer cells and found that at low concentrations, Amauroderma rude could inhibit cancer cell survival and induce apoptosis. Treated cancer cells also formed fewer and smaller colonies than the untreated cells. When nude mice bearing tumors were injected with Amauroderma rude extract, the tumors grew at a slower rate than the control. Examination of these tumors revealed extensive cell death, decreased proliferation rate as stained by Ki67, and increased apoptosis as stained by TUNEL. Suppression of c-myc expression appeared to be associated with these effects. Taken together, Amauroderma rude represented a powerful medicinal mushroom with anti-cancer activities.",
"title": ""
},
{
"docid": "neg:1840442_1",
"text": "Many existing studies of social media focus on only one platform, but the reality of users' lived experiences is that most users incorporate multiple platforms into their communication practices in order to access the people and networks they desire to influence. In order to better understand how people make sharing decisions across multiple sites, we asked our participants (N=29) to categorize all modes of communication they used, with the goal of surfacing their mental models about managing sharing across platforms. Our interview data suggest that people simultaneously consider \"audience\" and \"content\" when sharing and these needs sometimes compete with one another; that they have the strong desire to both maintain boundaries between platforms as well as allowing content and audience to permeate across these boundaries; and that they strive to stabilize their own communication ecosystem yet need to respond to changes necessitated by the emergence of new tools, practices, and contacts. We unpack the implications of these tensions and suggest future design possibilities.",
"title": ""
},
{
"docid": "neg:1840442_2",
"text": "Dialogue Act recognition associate dialogue acts (i.e., semantic labels) to utterances in a conversation. The problem of associating semantic labels to utterances can be treated as a sequence labeling problem. In this work, we build a hierarchical recurrent neural network using bidirectional LSTM as a base unit and the conditional random field (CRF) as the top layer to classify each utterance into its corresponding dialogue act. The hierarchical network learns representations at multiple levels, i.e., word level, utterance level, and conversation level. The conversation level representations are input to the CRF layer, which takes into account not only all previous utterances but also their dialogue acts, thus modeling the dependency among both, labels and utterances, an important consideration of natural dialogue. We validate our approach on two different benchmark data sets, Switchboard and Meeting Recorder Dialogue Act, and show performance improvement over the state-of-the-art methods by 2.2% and 4.1% absolute points, respectively. It is worth noting that the inter-annotator agreement on Switchboard data set is 84%, and our method is able to achieve the accuracy of about 79% despite being trained on the noisy data.",
"title": ""
},
{
"docid": "neg:1840442_3",
"text": "Autonomous vehicles are an emerging application of automotive technology. They can recognize the scene, plan the path, and control the motion by themselves while interacting with drivers. Although they receive considerable attention, components of autonomous vehicles are not accessible to the public but instead are developed as proprietary assets. To facilitate the development of autonomous vehicles, this article introduces an open platform using commodity vehicles and sensors. Specifically, the authors present algorithms, software libraries, and datasets required for scene recognition, path planning, and vehicle control. This open platform allows researchers and developers to study the basis of autonomous vehicles, design new algorithms, and test their performance using the common interface.",
"title": ""
},
{
"docid": "neg:1840442_4",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "neg:1840442_5",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
{
"docid": "neg:1840442_6",
"text": "Good performance and efficiency, in terms of high quality of service and resource utilization for example, are important goals in a cloud environment. Through extensive measurements of an n-tier application benchmark (RUBBoS), we show that overall system performance is surprisingly sensitive to appropriate allocation of soft resources (e.g., server thread pool size). Inappropriate soft resource allocation can quickly degrade overall application performance significantly. Concretely, both under-allocation and over-allocation of thread pool can lead to bottlenecks in other resources because of non-trivial dependencies. We have observed some non-obvious phenomena due to these correlated bottlenecks. For instance, the number of threads in the Apache web server can limit the total useful throughput, causing the CPU utilization of the C-JDBC clustering middleware to decrease as the workload increases. We provide a practical iterative solution approach to this challenge through an algorithmic combination of operational queuing laws and measurement data. Our results show that soft resource allocation plays a central role in the performance scalability of complex systems such as n-tier applications in cloud environments.",
"title": ""
},
{
"docid": "neg:1840442_7",
"text": "Software Defined Network (SDN) is the latest network architecture in which the data and control planes do not reside on the same networking element. The control of packet forwarding in this architecture is taken out and is carried out by a programmable software component, the controller, whereas the forwarding elements are only used as packet moving devices that are driven by the controller. SDN architecture also provides Open APIs from both control and data planes. In order to provide communication between the controller and the forwarding hardware among many available protocols, OpenFlow (OF), is generally regarded as a standardized protocol for SDN. Open APIs for communication between the controller and applications enable development of network management applications easy. Therefore, SDN makes it possible to program the network thus provide numerous benefits. As a result, various vendors have developed SDN architectures. This paper summarizes as well as compares most of the common SDN architectures available till date.",
"title": ""
},
{
"docid": "neg:1840442_8",
"text": "A Deep-learning architecture is a representation learning method with multiple levels of abstraction. It finds out complex structure of nonlinear processing layer in large datasets for pattern recognition. From the earliest uses of deep learning, Convolution Neural Network (CNN) can be trained by simple mathematical method based gradient descent. One of the most promising improvement of CNN is the integration of intelligent heuristic algorithms for learning optimization. In this paper, we use the seven layer CNN, named ConvNet, for handwriting digit classification. The Particle Swarm Optimization algorithm (PSO) is adapted to evolve the internal parameters of processing layers.",
"title": ""
},
{
"docid": "neg:1840442_9",
"text": "BACKGROUND\nMore than two-thirds of pregnant women experience low-back pain and almost one-fifth experience pelvic pain. The two conditions may occur separately or together (low-back and pelvic pain) and typically increase with advancing pregnancy, interfering with work, daily activities and sleep.\n\n\nOBJECTIVES\nTo update the evidence assessing the effects of any intervention used to prevent and treat low-back pain, pelvic pain or both during pregnancy.\n\n\nSEARCH METHODS\nWe searched the Cochrane Pregnancy and Childbirth (to 19 January 2015), and the Cochrane Back Review Groups' (to 19 January 2015) Trials Registers, identified relevant studies and reviews and checked their reference lists.\n\n\nSELECTION CRITERIA\nRandomised controlled trials (RCTs) of any treatment, or combination of treatments, to prevent or reduce the incidence or severity of low-back pain, pelvic pain or both, related functional disability, sick leave and adverse effects during pregnancy.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently assessed trials for inclusion and risk of bias, extracted data and checked them for accuracy.\n\n\nMAIN RESULTS\nWe included 34 RCTs examining 5121 pregnant women, aged 16 to 45 years and, when reported, from 12 to 38 weeks' gestation. Fifteen RCTs examined women with low-back pain (participants = 1847); six examined pelvic pain (participants = 889); and 13 examined women with both low-back and pelvic pain (participants = 2385). Two studies also investigated low-back pain prevention and four, low-back and pelvic pain prevention. Diagnoses ranged from self-reported symptoms to clinicians' interpretation of specific tests. All interventions were added to usual prenatal care and, unless noted, were compared with usual prenatal care. The quality of the evidence ranged from moderate to low, raising concerns about the confidence we could put in the estimates of effect. For low-back painResults from meta-analyses provided low-quality evidence (study design limitations, inconsistency) that any land-based exercise significantly reduced pain (standardised mean difference (SMD) -0.64; 95% confidence interval (CI) -1.03 to -0.25; participants = 645; studies = seven) and functional disability (SMD -0.56; 95% CI -0.89 to -0.23; participants = 146; studies = two). Low-quality evidence (study design limitations, imprecision) also suggested no significant differences in the number of women reporting low-back pain between group exercise, added to information about managing pain, versus usual prenatal care (risk ratio (RR) 0.97; 95% CI 0.80 to 1.17; participants = 374; studies = two). For pelvic painResults from a meta-analysis provided low-quality evidence (study design limitations, imprecision) of no significant difference in the number of women reporting pelvic pain between group exercise, added to information about managing pain, and usual prenatal care (RR 0.97; 95% CI 0.77 to 1.23; participants = 374; studies = two). 
For low-back and pelvic painResults from meta-analyses provided moderate-quality evidence (study design limitations) that: an eight- to 12-week exercise program reduced the number of women who reported low-back and pelvic pain (RR 0.66; 95% CI 0.45 to 0.97; participants = 1176; studies = four); land-based exercise, in a variety of formats, significantly reduced low-back and pelvic pain-related sick leave (RR 0.76; 95% CI 0.62 to 0.94; participants = 1062; studies = two).The results from a number of individual studies, incorporating various other interventions, could not be pooled due to clinical heterogeneity. There was moderate-quality evidence (study design limitations or imprecision) from individual studies suggesting that osteomanipulative therapy significantly reduced low-back pain and functional disability, and acupuncture or craniosacral therapy improved pelvic pain more than usual prenatal care. Evidence from individual studies was largely of low quality (study design limitations, imprecision), and suggested that pain and functional disability, but not sick leave, were significantly reduced following a multi-modal intervention (manual therapy, exercise and education) for low-back and pelvic pain.When reported, adverse effects were minor and transient.\n\n\nAUTHORS' CONCLUSIONS\nThere is low-quality evidence that exercise (any exercise on land or in water), may reduce pregnancy-related low-back pain and moderate- to low-quality evidence suggesting that any exercise improves functional disability and reduces sick leave more than usual prenatal care. Evidence from single studies suggests that acupuncture or craniosacral therapy improves pregnancy-related pelvic pain, and osteomanipulative therapy or a multi-modal intervention (manual therapy, exercise and education) may also be of benefit.Clinical heterogeneity precluded pooling of results in many cases. Statistical heterogeneity was substantial in all but three meta-analyses, which did not improve following sensitivity analyses. Publication bias and selective reporting cannot be ruled out.Further evidence is very likely to have an important impact on our confidence in the estimates of effect and change the estimates. Studies would benefit from the introduction of an agreed classification system that can be used to categorise women according to their presenting symptoms, so that treatment can be tailored accordingly.",
"title": ""
},
{
"docid": "neg:1840442_10",
"text": "The present study examined the nature of social support exchanged within an online HIV/AIDS support group. Content analysis was conducted with reference to five types of social support (information support, tangible assistance, esteem support, network support, and emotional support) on 85 threads (1,138 messages). Our analysis revealed that many of the messages offered informational and emotional support, followed by esteem support and network support, with tangible assistance the least frequently offered. Results suggest that this online support group is a popular forum through which individuals living with HIV/AIDS can offer social support. Our findings have implications for health care professionals who support individuals living with HIV/AIDS.",
"title": ""
},
{
"docid": "neg:1840442_11",
"text": "Recommender systems have been researched extensively over the past decades. Whereas several algorithms have been developed and deployed in various application domains, recent research effort s are increasingly oriented towards the user experience of recommender systems. This research goes beyond accuracy of recommendation algorithms and focuses on various human factors that affect acceptance of recommendations, such as user satisfaction, trust, transparency and sense of control. In this paper, we present an interactive visualization framework that combines recommendation with visualization techniques to support human-recommender interaction. Then, we analyze existing interactive recommender systems along the dimensions of our framework, including our work. Based on our survey results, we present future research challenges and opportunities. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840442_12",
"text": "Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we provide self-contained simplified C99 versions of the bugs, facilitating understanding and tool evaluation. Our study provides insights into the nature and occurrence of variability bugs in a large C software system, and shows in what ways variability affects and increases the complexity of software bugs.",
"title": ""
},
{
"docid": "neg:1840442_13",
"text": "Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840442_14",
"text": "Gradient coils for magnetic resonance imaging (MRI) require large currents (> 500 A) for the gradient field strength, as well as high voltage (> 1600 V) for fast slew rates. Additionally, extremely high fidelity, reproducing the command signal, is critical for image quality. A new driver topology recently proposed can provide the high power and operate at high switching frequency allowing high bandwidth control. The paper presents additional improvements to the new driver architecture, and more importantly, describes the digital control design and implementation, crucial to achieve the required performance level. The power stage and control have been build and tested with the experimental results showing that the performance achieved with the new digital control capability, more than fulfills the system requirements",
"title": ""
},
{
"docid": "neg:1840442_15",
"text": "Electric Vehicle (EV) drivers have an urgent demand for fast battery refueling methods for long distance trip and emergency drive. A well-planned battery swapping station (BSS) network can be a promising solution to offer timely refueling services. However, an inappropriate battery recharging process in the BSS may not only violate the stabilization of the power grid by their large power consumption, but also increase the charging cost from the BSS operators' point of view. In this paper, we aim to obtain the optimal charging policy to minimize the charging cost while ensuring the quality of service (QoS) of the BSS. A novel queueing network model is proposed to capture the operation nature for an individual BSS. Based on practical assumptions, we formulate the charging schedule problem as a stochastic control problem and achieve the optimal charging policy by dynamic programming. Monte Carlo simulation is used to evaluate the performance of different policies for both stationary and non-stationary EV arrival cases. Numerical results show the importance of determining the number of total batteries and charging outlets held in the BSS. Our work gives insight for the future infrastructure planning and operational management of BSS network.",
"title": ""
},
{
"docid": "neg:1840442_16",
"text": "A multi-functional in-memory inference processor integrated circuit (IC) in a 65-nm CMOS process is presented. The prototype employs a deep in-memory architecture (DIMA), which enhances both energy efficiency and throughput over conventional digital architectures via simultaneous access of multiple rows of a standard 6T bitcell array (BCA) per precharge, and embedding column pitch-matched low-swing analog processing at the BCA periphery. In doing so, DIMA exploits the synergy between the dataflow of machine learning (ML) algorithms and the SRAM architecture to reduce the dominant energy cost due to data movement. The prototype IC incorporates a 16-kB SRAM array and supports four commonly used ML algorithms—the support vector machine, template matching, <inline-formula> <tex-math notation=\"LaTeX\">$k$ </tex-math></inline-formula>-nearest neighbor, and the matched filter. Silicon measured results demonstrate simultaneous gains (dot product mode) in energy efficiency of 10<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> and in throughput of 5.3<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> leading to a 53<inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> reduction in the energy-delay product with negligible (<inline-formula> <tex-math notation=\"LaTeX\">$\\le $ </tex-math></inline-formula>1%) degradation in the decision-making accuracy, compared with the conventional 8-b fixed-point single-function digital implementations.",
"title": ""
},
{
"docid": "neg:1840442_17",
"text": "This paper presents a novel application to detect counterfeit identity documents forged by a scan-printing operation. Texture analysis approaches are proposed to extract validation features from security background that is usually printed in documents as IDs or banknotes. The main contribution of this work is the end-to-end mobile-server architecture, which provides a service for non-expert users and therefore can be used in several scenarios. The system also provides a crowdsourcing mode so labeled images can be gathered, generating databases for incremental training of the algorithms.",
"title": ""
},
{
"docid": "neg:1840442_18",
"text": "In the manufacturing industry, supply chain management is playing an important role in providing profit to the enterprise. Information that is useful in improving existing products and development of new products can be obtained from databases and ontology. The theory of inventive problem solving (TRIZ) supports designers of innovative product design by searching a knowledge base. The existing TRIZ ontology supports innovative design of specific products (Flashlight) for a TRIZ ontology. The research reported in this paper aims at developing a metaontology for innovative product design that can be applied to multiple products in different domain areas. The authors applied the semantic TRIZ to a product (Smart Fan) as an interim stage toward a metaontology that can manage general products and other concepts. Modeling real-world (Smart Pen and Smart Machine) ontologies is undertaken as an evaluation of the metaontology. This may open up new possibilities to innovative product designs. Innovative Product Design using Metaontology with Semantic TRIZ",
"title": ""
},
{
"docid": "neg:1840442_19",
"text": "Online bookings of hotels have increased drastically throughout recent years. Studies in tourism and hospitality have investigated the relevance of hotel attributes influencing choice but did not yet explore them in an online booking setting. This paper presents findings about consumers’ stated preferences for decision criteria from an adaptive conjoint study among 346 respondents. The results show that recommendations of friends and online reviews are the most important factors that influence online hotel booking. Partitioning the importance values of the decision criteria reveals group-specific differences indicating the presence of market segments.",
"title": ""
}
] |
1840443 | Graph Analytics Through Fine-Grained Parallelism | [
{
"docid": "pos:1840443_0",
"text": "Iterative computations are pervasive among data analysis applications in the cloud, including Web search, online social network analysis, recommendation systems, and so on. These cloud applications typically involve data sets of massive scale. Fast convergence of the iterative computation on the massive data set is essential for these applications. In this paper, we explore the opportunity for accelerating iterative computations and propose a distributed computing framework, PrIter, which enables fast iterative computation by providing the support of prioritized iteration. Instead of performing computations on all data records without discrimination, PrIter prioritizes the computations that help convergence the most, so that the convergence speed of iterative process is significantly improved. We evaluate PrIter on a local cluster of machines as well as on Amazon EC2 Cloud. The results show that PrIter achieves up to 50x speedup over Hadoop for a series of iterative algorithms.",
"title": ""
},
{
"docid": "pos:1840443_1",
"text": "Hekaton is a new database engine optimized for memory resident data and OLTP workloads. Hekaton is fully integrated into SQL Server; it is not a separate system. To take advantage of Hekaton, a user simply declares a table memory optimized. Hekaton tables are fully transactional and durable and accessed using T-SQL in the same way as regular SQL Server tables. A query can reference both Hekaton tables and regular tables and a transaction can update data in both types of tables. T-SQL stored procedures that reference only Hekaton tables can be compiled into machine code for further performance improvements. The engine is designed for high con-currency. To achieve this it uses only latch-free data structures and a new optimistic, multiversion concurrency control technique. This paper gives an overview of the design of the Hekaton engine and reports some experimental results.",
"title": ""
},
{
"docid": "pos:1840443_2",
"text": "Many distributed storage systems achieve high data access throughput via partitioning and replication, each system with its own advantages and tradeoffs. In order to achieve high scalability, however, today's systems generally reduce transactional support, disallowing single transactions from spanning multiple partitions. Calvin is a practical transaction scheduling and data replication layer that uses a deterministic ordering guarantee to significantly reduce the normally prohibitive contention costs associated with distributed transactions. Unlike previous deterministic database system prototypes, Calvin supports disk-based storage, scales near-linearly on a cluster of commodity machines, and has no single point of failure. By replicating transaction inputs rather than effects, Calvin is also able to support multiple consistency levels---including Paxos-based strong consistency across geographically distant replicas---at no cost to transactional throughput.",
"title": ""
}
] | [
{
"docid": "neg:1840443_0",
"text": "This paper describes the development of two nine-storey elevators control system for a residential building. The control system adopts PLC as controller, and uses a parallel connection dispatching rule based on \"minimum waiting time\" to run two elevators in parallel mode. The paper gives the basic structure, control principle and realization method of the PLC control system in detail. It also presents the ladder diagram of the key aspects of the system. The system has simple peripheral circuit and the operation result showed that it enhanced the reliability and pe.rformance of the elevators.",
"title": ""
},
{
"docid": "neg:1840443_1",
"text": "OBJECTIVE\nTo evaluate the effectiveness of a functional thumb orthosis on the dominant hand of patients with rheumatoid arthritis and boutonniere thumb.\n\n\nMETHODS\nForty patients with rheumatoid arthritis and boutonniere deformity of the thumb were randomly distributed into two groups. The intervention group used the orthosis daily and the control group used the orthosis only during the evaluation. Participants were evaluated at baseline as well as after 45 and 90 days. Assessments were preformed using the O'Connor Dexterity Test, Jamar dynamometer, pinch gauge, goniometry and the Health Assessment Questionnaire. A visual analogue scale was used to assess thumb pain in the metacarpophalangeal joint.\n\n\nRESULTS\nPatients in the intervention group experienced a statistically significant reduction in pain. The thumb orthosis did not disrupt grip and pinch strength, function, Health Assessment Questionnaire score or dexterity in either group.\n\n\nCONCLUSION\nThe use of thumb orthosis for type I and type II boutonniere deformities was effective in relieving pain.",
"title": ""
},
{
"docid": "neg:1840443_2",
"text": "We present the design and implementation of a real-time, distributed light field camera. Our system allows multiple viewers to navigate virtual cameras in a dynamically changing light field that is captured in real-time. Our light field camera consists of 64 commodity video cameras that are connected to off-the-shelf computers. We employ a distributed rendering algorithm that allows us to overcome the data bandwidth problems inherent in dynamic light fields. Our algorithm works by selectively transmitting only those portions of the video streams that contribute to the desired virtual views. This technique not only reduces the total bandwidth, but it also allows us to scale the number of cameras in our system without increasing network bandwidth. We demonstrate our system with a number of examples.",
"title": ""
},
{
"docid": "neg:1840443_3",
"text": "Familial gigantiform cementoma (FGC) is a rare autosomal dominant, benign fibro-cemento-osseous lesion of the jaws that can cause severe facial deformity. True FGC with familial history is extremely rare and there has been no literature regarding the radiological follow-up of FGC. We report a case of recurrent FGC in an Asian female child who has been under our observation for 6 years since she was 15 months old. After repeated recurrences and subsequent surgeries, the growth of the tumor had seemed to plateau on recent follow-up CT images. The transition from an enhancing soft tissue lesion to a homogeneous bony lesion on CT may indicate decreased growth potential of FGC.",
"title": ""
},
{
"docid": "neg:1840443_4",
"text": "Despite the profusion of NIALM researches and products using complex algorithms, addressing the market for low cost, compact, real-time and effective NIALM smart meters is still a challenge. This paper talks about the design of a NIALM smart meter for home appliances, with the ability to self-detect and disaggregate most home appliances. In order to satisfy the compact, real-time, low price requirements and to solve the challenge in slow transient and multi-state appliances, two algorithms are used: the CUSUM to improve the event detection and the Genetic Algorithm (GA) for appliance disaggregation. Evaluation of these algorithms has been done according to public NIALM REDD data set [6]. They are now in first stage of architecture design using Labview FPGA methodology. KeywordsNIALM, CUSUM, Genetic Algorithm, K-mean, classification, smart meter, FPGA.",
"title": ""
},
{
"docid": "neg:1840443_5",
"text": "The paper analyzes some forms of linguistic ambiguity in English in a specific register, i.e. newspaper headlines. In particular, the focus of the research is on examples of lexical and syntactic ambiguity that result in sources of voluntary or involuntary humor. The study is based on a corpus of 135 verbally ambiguous headlines found on web sites presenting humorous bits of information. The linguistic phenomena that contribute to create this kind of semantic confusion in headlines will be analyzed and divided into the three main categories of lexical, syntactic, and phonological ambiguity, and examples from the corpus will be discussed for each category. The main results of the study were that, firstly, contrary to the findings of previous research on jokes, syntactically ambiguous headlines were found in good percentage in the corpus and that this might point to di¤erences in genre. Secondly, two new configurations for the processing of the disjunctor/connector order were found. In the first of these configurations the disjunctor appears before the connector, instead of being placed after or coinciding with the ambiguous element, while in the second one two ambiguous elements are present, each of which functions both as a connector and",
"title": ""
},
{
"docid": "neg:1840443_6",
"text": "Software defined radio (SDR) is a rapidly evolving technology which implements some functional modules of a radio system in software executing on a programmable processor. SDR provides a flexible mechanism to reconfigure the radio, enabling networked devices to easily adapt to user preferences and the operating environment. However, the very mechanisms that provide the ability to reconfigure the radio through software also give rise to serious security concerns such as unauthorized modification of the software, leading to radio malfunction and interference with other users' communications. Both the SDR device and the network need to be protected from such malicious radio reconfiguration.\n In this paper, we propose a new architecture to protect SDR devices from malicious reconfiguration. The proposed architecture is based on robust separation of the radio operation environment and user application environment through the use of virtualization. A secure radio middleware layer is used to intercept all attempts to reconfigure the radio, and a security policy monitor checks the target configuration against security policies that represent the interests of various parties. Therefore, secure reconfiguration can be ensured in the radio operation environment even if the operating system in the user application environment is compromised. We have prototyped the proposed secure SDR architecture using VMware and the GNU Radio toolkit, and demonstrate that the overheads incurred by the architecture are small and tolerable. Therefore, we believe that the proposed solution could be applied to address SDR security concerns in a wide range of both general-purpose and embedded computing systems.",
"title": ""
},
{
"docid": "neg:1840443_7",
"text": "We propose a novel adaptive test of goodness-of-fit, with computational cost linear in the number of samples. We learn the test features that best indicate the differences between observed samples and a reference model, by minimizing the false negative rate. These features are constructed via Stein’s method, meaning that it is not necessary to compute the normalising constant of the model. We analyse the asymptotic Bahadur efficiency of the new test, and prove that under a mean-shift alternative, our test always has greater relative efficiency than a previous linear-time kernel test, regardless of the choice of parameters for that test. In experiments, the performance of our method exceeds that of the earlier linear-time test, and matches or exceeds the power of a quadratic-time kernel test. In high dimensions and where model structure may be exploited, our goodness of fit test performs far better than a quadratic-time two-sample test based on the Maximum Mean Discrepancy, with samples drawn from the model.",
"title": ""
},
{
"docid": "neg:1840443_8",
"text": "Wireless access to the Internet via PDAs (personal digital assistants) provides Web type services in the mobile world. What we are lacking are design guidelines for such PDA services. For Web publishing, however, there are many resources to look for guidelines. The guidelines can be classified according to which aspect of the Web media they are related: software/hardware, content and its organization, or aesthetics and layout. In order to be applicable to PDA services, these guidelines have to be modified. In this paper we analyze the main characteristics of PDAs and their influence to the guidelines.",
"title": ""
},
{
"docid": "neg:1840443_9",
"text": "Purpose: The purpose of this paper is to perform a systematic review of articles that have used the unified theory of acceptance and use of technology (UTAUT). Design/methodology/approach: The results produced in this research are based on the literature analysis of 174 existing articles on the UTAUT model. This has been performed by collecting data including demographic details, methodological details, limitations, and significance of relationships between the constructs from the available articles based on the UTAUT. Findings: The findings were categorised by dividing the articles that used the UTAUT model into types of information systems used, research approach and methods employed, and tools and techniques implemented to analyse results. We also perform the weight analysis of variables and found that performance expectancy and behavioural intention qualified for the best predictor category. The research also analysed and presented the limitations of existing studies. Research limitations/implications: The search activities were centered on occurrences of keywords to avoid tracing a large number of publications where these keywords might have been used as casual words in the main text. However, we acknowledge that there may be a number of studies, which lack keywords in the title, but still focus upon UTAUT in some form. Originality/value: This is the first research of its type, which has extensively examined the literature on the UTAUT and provided the researchers with the accumulative knowledge about the model.",
"title": ""
},
{
"docid": "neg:1840443_10",
"text": "Complex queries over high speed data streams often need to rely on approximations to keep up with their input. The research community has developed a rich literature on approximate streaming algorithms for this application. Many of these algorithms produce samples of the input stream, providing better properties than conventional random sampling. In this paper, we abstract the stream sampling process and design a new stream sample operator. We show how it can be used to implement a wide variety of algorithms that perform sampling and sampling-based aggregations. Also, we show how to implement the operator in Gigascope - a high speed stream database specialized for IP network monitoring applications. As an example study, we apply the operator within such an enhanced Gigascope to perform subset-sum sampling which is of great interest for IP network management. We evaluate this implemention on a live, high speed internet traffic data stream and find that (a) the operator is a flexible, versatile addition to Gigascope suitable for tuning and algorithm engineering, and (b) the operator imposes only a small evaluation overhead. This is the first operational implementation we know of, for a wide variety of stream sampling algorithms at line speed within a data stream management system.",
"title": ""
},
{
"docid": "neg:1840443_11",
"text": "This clinical trial assessed the ability of Gluma Dentin Bond to inhibit dentinal sensitivity in teeth prepared to receive complete cast restorations. Twenty patients provided 76 teeth for the study. Following tooth preparation, dentinal surfaces were coated with either sterile water (control) or two 30-second applications of Gluma Dentin Bond (test) on either intact or removed smear layers. Patients were recalled after 14 days for a test of sensitivity of the prepared dentin to compressed air, osmotic stimulus (saturated CaCl2 solution), and tactile stimulation via a scratch test under controlled loads. A significantly lower number of teeth responded to the test stimuli for both Gluma groups when compared to the controls (P less than .01). No difference was noted between teeth with smear layers intact or removed prior to treatment with Gluma.",
"title": ""
},
{
"docid": "neg:1840443_12",
"text": "Ternary logic is a promising alternative to the conventional binary logic in VLSI design as it provides the advantages of reduced interconnects, higher operating speeds, and smaller chip area. This paper presents a pair of circuits for implementing a ternary half adder using carbon nanotube field-effect transistors. The proposed designs combine both futuristic ternary and conventional binary logic design approach. One of the proposed circuits for ternary to binary decoder simplifies further circuit implementation and provides excellent delay and power advantages in data path circuit such as adder. These circuits have been extensively simulated using HSPICE to obtain power, delay, and power delay product. The circuit performances are compared with alternative designs reported in recent literature. One of the proposed ternary adders has been demonstrated power, power delay product improvement up to 63% and 66% respectively, with lesser transistor count. So, the use of these half adders in complex arithmetic circuits will be advantageous.",
"title": ""
},
{
"docid": "neg:1840443_13",
"text": "Chirp signals are very common in radar, communication, sonar, and etc. Little is known about chirp images, i.e., 2-D chirp signals. In fact, such images frequently appear in optics and medical science. Newton's rings fringe pattern is a classical example of the images, which is widely used in optical metrology. It is known that the fractional Fourier transform(FRFT) is a convenient method for processing chirp signals. Furthermore, it can be extended to 2-D fractional Fourier transform for processing 2-D chirp signals. It is interesting to observe the chirp images in the 2-D fractional Fourier transform domain and extract some physical parameters hidden in the images. Besides that, in the FRFT domain, it is easy to separate the 2-D chirp signal from other signals to obtain the desired image.",
"title": ""
},
{
"docid": "neg:1840443_14",
"text": "Systematic procedure is described for designing bandpass filters with wide bandwidths based on parallel coupled three-line microstrip structures. It is found that the tight gap sizes between the resonators of end stages and feed lines, required for wideband filters based on traditional coupled line design, can be greatly released. The relation between the circuit parameters of a three-line coupling section and an admittance inverter circuit is derived. A design graph for substrate with /spl epsiv//sub r/=10.2 is provided. Two filters of orders 3 and 5 with fractional bandwidths 40% and 50%, respectively, are fabricated and measured. Good agreement between prediction and measurement is obtained.",
"title": ""
},
{
"docid": "neg:1840443_15",
"text": "A new perspective on the topic of antibiotic resistance is beginning to emerge based on a broader evolutionary and ecological understanding rather than from the traditional boundaries of clinical research of antibiotic-resistant bacterial pathogens. Phylogenetic insights into the evolution and diversity of several antibiotic resistance genes suggest that at least some of these genes have a long evolutionary history of diversification that began well before the 'antibiotic era'. Besides, there is no indication that lateral gene transfer from antibiotic-producing bacteria has played any significant role in shaping the pool of antibiotic resistance genes in clinically relevant and commensal bacteria. Most likely, the primary antibiotic resistance gene pool originated and diversified within the environmental bacterial communities, from which the genes were mobilized and penetrated into taxonomically and ecologically distant bacterial populations, including pathogens. Dissemination and penetration of antibiotic resistance genes from antibiotic producers were less significant and essentially limited to other high G+C bacteria. Besides direct selection by antibiotics, there is a number of other factors that may contribute to dissemination and maintenance of antibiotic resistance genes in bacterial populations.",
"title": ""
},
{
"docid": "neg:1840443_16",
"text": "Simulink Stateflow is widely used for the model-driven development of software. However, the increasing demand of rigorous verification for safety critical applications brings new challenge to the Simulink Stateflow because of the lack of formal semantics. In this paper, we present STU, a self-contained toolkit to bridge the Simulink Stateflow and a well-defined rigorous verification. The tool translates the Simulink Stateflow into the Uppaal timed automata for verification. Compared to existing work, more advanced and complex modeling features in Stateflow such as the event stack, conditional action and timer are supported. Then, with the strong verification power of Uppaal, we can not only find design defects that are missed by the Simulink Design Verifier, but also check more important temporal properties. The evaluation on artificial examples and real industrial applications demonstrates the effectiveness.",
"title": ""
},
{
"docid": "neg:1840443_17",
"text": "Implant surgery in mandibular anterior region may turn from an easy minor surgery into a complicated one for the surgeon, due to inadequate knowledge of the anatomy of the surgical area and/or ignorance toward the required surgical protocol. Hence, the purpose of this article is to present an overview on the: (a) Incidence of massive bleeding and its consequences after implant placement in mandibular anterior region. (b) Its etiology, the precautionary measures to be taken to avoid such an incidence in clinical practice and management of such a hemorrhage if at all happens. An inclusion criterion for selection of article was defined, and an electronic Medline search through different database using different keywords and manual search in journals and books was executed. Relevant articles were selected based upon inclusion criteria to form the valid protocols for implant surgery in the anterior mandible. Further, from the selected articles, 21 articles describing case reports were summarized separately in a table to alert the dental surgeons about the morbidity they could come across while operating in this region. If all the required adequate measures for diagnosis and treatment planning are taken and appropriate surgical protocol is followed, mandibular anterior region is no doubt a preferable area for implant placement.",
"title": ""
},
{
"docid": "neg:1840443_18",
"text": "In this study, we intend to identify the evolutionary footprints of the South Iberian population focusing on the Berber and Arab influence, which has received little attention in the literature. Analysis of the Y-chromosome variation represents a convenient way to assess the genetic contribution of North African populations to the present-day South Iberian genetic pool and could help to reconstruct other demographic events that could have influenced on that region. A total of 26 Y-SNPs and 17 Y-STRs were genotyped in 144 samples from 26 different districts of South Iberia in order to assess the male genetic composition and the level of substructure of male lineages in this area. To obtain a more comprehensive picture of the genetic structure of the South Iberian region as a whole, our data were compared with published data on neighboring populations. Our analyses allow us to confirm the specific impact of the Arab and Berber expansion and dominion of the Peninsula. Nevertheless, our results suggest that this influence is not bigger in Andalusia than in other Iberian populations.",
"title": ""
},
{
"docid": "neg:1840443_19",
"text": "We present SNIPER, an algorithm for performing efficient multi-scale training in instance level visual recognition tasks. Instead of processing every pixel in an image pyramid, SNIPER processes context regions around ground-truth instances (referred to as chips) at the appropriate scale. For background sampling, these context-regions are generated using proposals extracted from a region proposal network trained with a short learning schedule. Hence, the number of chips generated per image during training adaptively changes based on the scene complexity. SNIPER only processes 30% more pixels compared to the commonly used single scale training at 800x1333 pixels on the COCO dataset. But, it also observes samples from extreme resolutions of the image pyramid, like 1400x2000 pixels. As SNIPER operates on resampled low resolution chips (512x512 pixels), it can have a batch size as large as 20 on a single GPU even with a ResNet-101 backbone. Therefore it can benefit from batch-normalization during training without the need for synchronizing batch-normalization statistics across GPUs. SNIPER brings training of instance level recognition tasks like object detection closer to the protocol for image classification and suggests that the commonly accepted guideline that it is important to train on high resolution images for instance level visual recognition tasks might not be correct. Our implementation based on Faster-RCNN with a ResNet-101 backbone obtains an mAP of 47.6% on the COCO dataset for bounding box detection and can process 5 images per second during inference with a single GPU. Code is available at https://github.com/mahyarnajibi/SNIPER/.",
"title": ""
}
] |
1840444 | Densely Connected Convolutional Neural Network for Multi-purpose Image Forensics under Anti-forensic Attacks | [
{
"docid": "pos:1840444_0",
"text": "Image forensics has attracted wide attention during the past decade. However, most existing works aim at detecting a certain operation, which means that their proposed features usually depend on the investigated image operation and they consider only binary classification. This usually leads to misleading results if irrelevant features and/or classifiers are used. For instance, a JPEG decompressed image would be classified as an original or median filtered image if it was fed into a median filtering detector. Hence, it is important to develop forensic methods and universal features that can simultaneously identify multiple image operations. Based on extensive experiments and analysis, we find that any image operation, including existing anti-forensics operations, will inevitably modify a large number of pixel values in the original images. Thus, some common inherent statistics such as the correlations among adjacent pixels cannot be preserved well. To detect such modifications, we try to analyze the properties of local pixels within the image in the residual domain rather than the spatial domain considering the complexity of the image contents. Inspired by image steganalytic methods, we propose a very compact universal feature set and then design a multiclass classification scheme for identifying many common image operations. In our experiments, we tested the proposed features as well as several existing features on 11 typical image processing operations and four kinds of anti-forensic methods. The experimental results show that the proposed strategy significantly outperforms the existing forensic methods in terms of both effectiveness and universality.",
"title": ""
},
{
"docid": "pos:1840444_1",
"text": "This paper proposes a new, conceptually simple and effective forensic method to address both the generality and the fine-grained tampering localization problems of image forensics. Corresponding to each kind of image operation, a rich GMM (Gaussian Mixture Model) is learned as the image statistical model for small image patches. Thereafter, the binary classification problem, whether a given image block has been previously processed, can be solved by comparing the average patch log-likelihood values calculated on overlapping image patches under different GMMs of original and processed images. With comparisons to a powerful steganalytic feature, experimental results demonstrate the efficiency of the proposed method, for multiple image operations, on whole images and small blocks.",
"title": ""
}
] | [
{
"docid": "neg:1840444_0",
"text": "Anvil is a tool for the annotation of audiovisual material containing multimodal dialogue. Annotation takes place on freely definable, multiple layers (tracks) by inserting time-anchored elements that hold a number of typed attribute-value pairs. Higher-level elements (suprasegmental) consist of a sequence of elements. Attributes contain symbols or cross-level links to arbitrary other elements. Anvil is highly generic (usable with different annotation schemes), platform-independent, XMLbased and fitted with an intuitive graphical user interface. For project integration, Anvil offers the import of speech transcription and export of text and table data for further statistical processing.",
"title": ""
},
{
"docid": "neg:1840444_1",
"text": "Sentiment Analysis(SA) is a combination of emotions, opinions and subjectivity of text. Today, social networking sites like Twitter are tremendously used in expressing the opinions about a particular entity in the form of tweets which are limited to 140 characters. Reviews and opinions play a very important role in understanding peoples satisfaction regarding a particular entity. Such opinions have high potential for knowledge discovery. The main target of SA is to find opinions from tweets, extract sentiments from them and then define their polarity, i.e, positive, negative or neutral. Most of the work in this domain has been done for English Language. In this paper, we discuss and propose sentiment analysis using Hindi language. We will discuss an unsupervised lexicon method for classification.",
"title": ""
},
{
"docid": "neg:1840444_2",
"text": "In this paper, a novel technique to enhance the bandwidth of substrate integrated waveguide cavity backed slot antenna is demonstrated. The feeding technique to the cavity backed antenna has been modified by introducing offset feeding of microstrip line along with microstrip to grounded coplanar waveguide transition which helps to excite TE120 mode in the cavity and also to get improvement in impedance matching to the slot antenna simultaneously. The proposed antenna is designed to resonate in X band (8-12 GHz) and shows a resonance at 10.2 GHz with a bandwidth of 4.2% and a gain of 5.6 dBi, 15.6 dB front to back ratio and -30 dB maximum cross polarization level.",
"title": ""
},
{
"docid": "neg:1840444_3",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
{
"docid": "neg:1840444_4",
"text": "A large number of problems in AI and other areas of computer science can be viewed as special cases of the constraint-satisfaction problem. Some examples are machine vision, belief maintenance, scheduling, temporal reasoning, graph problems, floor plan design, the planning of genetic experiments, and the satisfiability problem. A number of different approaches have been developed for solving these problems. Some of them use constraint propagation to simplify the original problem. Others use backtracking to directly search for possible solutions. Some are a combination of these two techniques. This article overviews many of these approaches in a tutorial fashion. Articles",
"title": ""
},
{
"docid": "neg:1840444_5",
"text": "The importance of cost planning for solid waste management (SWM) in industrialising regions (IR) is not well recognised. The approaches used to estimate costs of SWM can broadly be classified into three categories - the unit cost method, benchmarking techniques and developing cost models using sub-approaches such as cost and production function analysis. These methods have been developed into computer programmes with varying functionality and utility. IR mostly use the unit cost and benchmarking approach to estimate their SWM costs. The models for cost estimation, on the other hand, are used at times in industrialised countries, but not in IR. Taken together, these approaches could be viewed as precedents that can be modified appropriately to suit waste management systems in IR. The main challenges (or problems) one might face while attempting to do so are a lack of cost data, and a lack of quality for what data do exist. There are practical benefits to planners in IR where solid waste problems are critical and budgets are limited.",
"title": ""
},
{
"docid": "neg:1840444_6",
"text": "Many emerging applications such as intruder detection and border protection drive the fast increasing development of device-free passive (DfP) localization techniques. In this paper, we present Pilot, a Channel State Information (CSI)-based DfP indoor localization system in WLAN. Pilot design is motivated by the observations that PHY layer CSI is capable of capturing the environment variance due to frequency diversity of wideband channel, such that the position where the entity located can be uniquely identified by monitoring the CSI feature pattern shift. Therefore, a ``passive'' radio map is constructed as prerequisite which include fingerprints for entity located in some crucial reference positions, as well as clear environment. Unlike device-based approaches that directly percepts the current state of entities, the first challenge for DfP localization is to detect their appearance in the area of interest. To this end, we design an essential anomaly detection block as the localization trigger relying on the CSI feature shift when entity emerges. Afterwards, a probabilistic algorithm is proposed to match the abnormal CSI to the fingerprint database to estimate the positions of potential existing entities. Finally, a data fusion block is developed to address the multiple entities localization challenge. We have implemented Pilot system with commercial IEEE 802.11n NICs and evaluated the performance in two typical indoor scenarios. It is shown that our Pilot system can greatly outperform the corresponding best RSS-based scheme in terms of anomaly detection and localization accuracy.",
"title": ""
},
{
"docid": "neg:1840444_7",
"text": "The Domain Name System (DNS) is an essential network infrastructure component since it supports the operation of the Web, Email, Voice over IP (VoIP) and other business- critical applications running over the network. Events that compromise the security of DNS can have a significant impact on the Internet since they can affect its availability and its intended operation. This paper describes algorithms used to monitor and detect certain types of attacks to the DNS infrastructure using flow data. Our methodology is based on algorithms that do not rely on known signature attack vectors. The effectiveness of our solution is illustrated with real and simulated traffic examples. In one example, we were able to detect a tunneling attack well before the appearance of public reports of it.",
"title": ""
},
{
"docid": "neg:1840444_8",
"text": "A Ku-band 200-W pulsed solid-state power amplifier has been presented and designed by using a hybrid radial-/rectangular-waveguide spatially power-combining technique. The hybrid radial-/rectangular-waveguide power-dividing/power-combining circuit employed in this design provides not only a high power-combining efficiency over a wide bandwidth but also efficient heat sinking for the active power devices. A simple design approach of the presented power-dividing/power-combining structure has been developed. The measured small-signal gain of the pulsed power amplifier is about 51.3 dB over the operating frequency range, while the measured maximum output power at 1-dB compression is 209 W at 13.9 GHz, with an active power-combining efficiency of about 91%. Furthermore, the active power-combining efficiency is greater than 82% from 13.75 to 14.5 GHz.",
"title": ""
},
{
"docid": "neg:1840444_9",
"text": "Support Vector Machines SVMs have proven to be highly e ective for learning many real world datasets but have failed to establish them selves as common machine learning tools This is partly due to the fact that they are not easy to implement and their standard imple mentation requires the use of optimization packages In this paper we present simple iterative algorithms for training support vector ma chines which are easy to implement and guaranteed to converge to the optimal solution Furthermore we provide a technique for automati cally nding the kernel parameter and best learning rate Extensive experiments with real datasets are provided showing that these al gorithms compare well with standard implementations of SVMs in terms of generalisation accuracy and computational cost while being signi cantly simpler to implement",
"title": ""
},
{
"docid": "neg:1840444_10",
"text": "Precise measurement of the local position of moveable targets in three dimensions is still considered to be a challenge. With the presented local position measurement technology, a novel system, consisting of small and lightweight measurement transponders and a number of fixed base stations, is introduced. The system is operating in the 5.8-GHz industrial-scientific-medical band and can handle up to 1000 measurements per second with accuracies down to a few centimeters. Mathematical evaluation is based on a mechanical equivalent circuit. Measurement results obtained with prototype boards demonstrate the feasibility of the proposed technology in a practical application at a race track.",
"title": ""
},
{
"docid": "neg:1840444_11",
"text": "This paper describes a monocular vision based parking-slot-markings recognition algorithm, which is used to automate the target position selection of automatic parking assist system. Peak-pair detection and clustering in Hough space recognize marking lines. Specially, one-dimensional filter in Hough space is designed to utilize a priori knowledge about the characteristics of marking lines in bird's eye view edge image. Modified distance between point and line-segment is used to distinguish guideline from recognized marking line-segments. Once the guideline is successfully recognized, T-shape template matching easily recognizes dividing marking line-segments. Experiments show that proposed algorithm successfully recognizes parking slots even when adjacent vehicles occlude parking-slot-markings severely",
"title": ""
},
{
"docid": "neg:1840444_12",
"text": "Face detection constitutes a key visual information analysis task in Machine Learning. The rise of Big Data has resulted in the accumulation of a massive volume of visual data which requires proper and fast analysis. Deep Learning methods are powerful approaches towards this task as training with large amounts of data exhibiting high variability has been shown to significantly enhance their effectiveness, but often requires expensive computations and leads to models of high complexity. When the objective is to analyze visual content in massive datasets, the complexity of the model becomes crucial to the success of the model. In this paper, a lightweight deep Convolutional Neural Network (CNN) is introduced for the purpose of face detection, designed with a view to minimize training and testing time, and outperforms previously published deep convolutional networks in this task, in terms of both effectiveness and efficiency. To train this lightweight deep network without compromising its efficiency, a new training method of progressive positive and hard negative sample mining is introduced and shown to drastically improve training speed and accuracy. Additionally, a separate deep network was trained to detect individual facial features and a model that combines the outputs of the two networks was created and evaluated. Both methods are capable of detecting faces under severe occlusion and unconstrained pose variation and meet the difficulties of large scale real-world, real-time face detection, and are suitable for deployment even in mobile environments such as Unmanned Aerial Vehicles (UAVs).",
"title": ""
},
{
"docid": "neg:1840444_13",
"text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1. In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.",
"title": ""
},
{
"docid": "neg:1840444_14",
"text": "Contractile myocytes provide a test of the hypothesis that cells sense their mechanical as well as molecular microenvironment, altering expression, organization, and/or morphology accordingly. Here, myoblasts were cultured on collagen strips attached to glass or polymer gels of varied elasticity. Subsequent fusion into myotubes occurs independent of substrate flexibility. However, myosin/actin striations emerge later only on gels with stiffness typical of normal muscle (passive Young's modulus, E approximately 12 kPa). On glass and much softer or stiffer gels, including gels emulating stiff dystrophic muscle, cells do not striate. In addition, myotubes grown on top of a compliant bottom layer of glass-attached myotubes (but not softer fibroblasts) will striate, whereas the bottom cells will only assemble stress fibers and vinculin-rich adhesions. Unlike sarcomere formation, adhesion strength increases monotonically versus substrate stiffness with strongest adhesion on glass. These findings have major implications for in vivo introduction of stem cells into diseased or damaged striated muscle of altered mechanical composition.",
"title": ""
},
{
"docid": "neg:1840444_15",
"text": "The prediction task in national language processing means to guess the missing letter, word, phrase, or sentence that likely follow in a given segment of a text. Since 1980s many systems with different methods were developed for different languages. In this paper an overview of the existing prediction methods that have been used for more than two decades are described and a general classification of the approaches is presented. The three main categories of the classification are statistical modeling, knowledge-based modeling, and heuristic modeling (adaptive).",
"title": ""
},
{
"docid": "neg:1840444_16",
"text": "Context awareness was introduced recently in several fields in quotidian human activities. Among context aware applications, health care systems are the most important ones. Such applications, in order to perceive the context, rely on sensors which may be physical or virtual. However, these applications lack of standardization in handling the context and the perceived sensors data. In this work, we propose a formal context aware application architecture model to deal with the context taking into account the scalability and interoperability as key features towards an abstraction of the context relatively to end user applications. As a proof of concept, we present also a case study and simulation explaining the operational aspect of this architecture in health care systems.",
"title": ""
},
{
"docid": "neg:1840444_17",
"text": "This chapter presents the fundamentals and applications of the State Machine Replication (SMR) technique for implementing consistent fault-tolerant services. Our focus here is threefold. First we present some fundamentals about distributed computing and three “practical” SMR protocols for different fault models. Second, we discuss some recent work aiming to improve the performance, modularity and robustness of SMR protocols. Finally, we present some prominent applications for SMR and an example of the real code needed for implementing a dependable service using the BFT-SMART replication library.",
"title": ""
},
{
"docid": "neg:1840444_18",
"text": "A systematic method for deriving soft-switching three-port converters (TPCs), which can interface multiple energy, is proposed in this paper. Novel full-bridge (FB) TPCs featuring single-stage power conversion, reduced conduction loss, and low-voltage stress are derived. Two nonisolated bidirectional power ports and one isolated unidirectional load port are provided by integrating an interleaved bidirectional Buck/Boost converter and a bridgeless Boost rectifier via a high-frequency transformer. The switching bridges on the primary side are shared; hence, the number of active switches is reduced. Primary-side pulse width modulation and secondary-side phase shift control strategy are employed to provide two control freedoms. Voltage and power regulations over two of the three power ports are achieved. Furthermore, the current/voltage ripples on the primary-side power ports are reduced due to the interleaving operation. Zero-voltage switching and zero-current switching are realized for the active switches and diodes, respectively. A typical FB-TPC with voltage-doubler rectifier developed by the proposed method is analyzed in detail. Operation principles, control strategy, and characteristics of the FB-TPC are presented. Experiments have been carried out to demonstrate the feasibility and effectiveness of the proposed topology derivation method.",
"title": ""
},
{
"docid": "neg:1840444_19",
"text": "When a mesh of simplicial elements (triangles or tetrahedra) is used to form a piecewise linear approximation of a function, the accuracy of the approximation depends on the sizes and shapes of the elements. In finite element methods, the conditioning of the stiffness matrices also depends on the sizes and shapes of the elements. This paper explains the mathematical connections between mesh geometry, interpolation errors, and stiffness matrix conditioning. These relationships are expressed by error bounds and element quality measures that determine the fitness of a triangle or tetrahedron for interpolation or for achieving low condition numbers. Unfortunately, the quality measures for these two purposes do not agree with each other; for instance, small angles are bad for matrix conditioning but not for interpolation. Several of the upper and lower bounds on interpolation errors and element stiffness matrix conditioning given here are tighter than those that have appeared in the literature before, so the quality measures are likely to be unusually precise indicators of element fitness.",
"title": ""
}
] |
1840445 | Time for a paradigm change in meniscal repair: save the meniscus! | [
{
"docid": "pos:1840445_0",
"text": "PURPOSE\nTo systematically review the results of arthroscopic transtibial pullout repair (ATPR) for posterior medial meniscus root tears.\n\n\nMETHODS\nA systematic electronic search of the PubMed database and the Cochrane Library was performed in September 2014 to identify studies that reported clinical, radiographic, or second-look arthroscopic outcomes of ATPR for posterior medial meniscus root tears. Included studies were abstracted regarding study characteristics, patient demographic characteristics, surgical technique, rehabilitation, and outcome measures. The methodologic quality of the included studies was assessed with the modified Coleman Methodology Score.\n\n\nRESULTS\nSeven studies with a total of 172 patients met the inclusion criteria. The mean patient age was 55.3 years, and 83% of patients were female patients. Preoperative and postoperative Lysholm scores were reported for all patients. After a mean follow-up period of 30.2 months, the Lysholm score increased from 52.4 preoperatively to 85.9 postoperatively. On conventional radiographs, 64 of 76 patients (84%) showed no progression of Kellgren-Lawrence grading. Magnetic resonance imaging showed no progression of cartilage degeneration in 84 of 103 patients (82%) and showed reduced medial meniscal extrusion in 34 of 61 patients (56%). On the basis of second-look arthroscopy and magnetic resonance imaging in 137 patients, the healing status was rated as complete in 62%, partial in 34%, and failed in 3%. Overall, the methodologic quality of the included studies was fair, with a mean modified Coleman Methodology Score of 63.\n\n\nCONCLUSIONS\nATPR significantly improves functional outcome scores and seems to prevent the progression of osteoarthritis in most patients, at least during a short-term follow-up. Complete healing of the repaired root and reduction of meniscal extrusion seem to be less predictable, being observed in only about 60% of patients. Conclusions about the progression of osteoarthritis and reduction of meniscal extrusion are limited by the small portion of patients undergoing specific evaluation (44% and 35% of the study group, respectively).\n\n\nLEVEL OF EVIDENCE\nLevel IV, systematic review of Level III and IV studies.",
"title": ""
}
] | [
{
"docid": "neg:1840445_0",
"text": "Copyright: © 2018 The Author(s) Abstract. In the last few years, leading-edge research from information systems, strategic management, and economics have separately informed our understanding of platforms and infrastructures in the digital age. Our motivation for undertaking this special issue rests in the conviction that it is significant to discuss platforms and infrastructures concomitantly, while enabling knowledge from diverse disciplines to cross-pollinate to address critical, pressing policy challenges and inform strategic thinking across both social and business spheres. In this editorial, we review key insights from the literature on digital infrastructures and platforms, present emerging research themes, highlight the contributions developed from each of the six articles in this special issue, and conclude with suggestions for further research.",
"title": ""
},
{
"docid": "neg:1840445_1",
"text": "Grasping-force optimization of multifingered robotic hands can be formulated as a problem for minimizing an objective function subject to form-closure constraints and balance constraints of external force. This paper presents a novel recurrent neural network for real-time dextrous hand-grasping force optimization. The proposed neural network is shown to be globally convergent to the optimal grasping force. Compared with existing approaches to grasping-force optimization, the proposed neural-network approach has the advantages that the complexity for implementation is reduced, and the solution accuracy is increased, by avoiding the linearization of quadratic friction constraints. Simulation results show that the proposed neural network can achieve optimal grasping force in real time.",
"title": ""
},
{
"docid": "neg:1840445_2",
"text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. Real-time experiments are conducted in both indoor and outdoor environments.",
"title": ""
},
{
"docid": "neg:1840445_3",
"text": "Reaction of WF6 with air-exposed 27and 250-nm-thick Ti films has been studied using Rutherford backscattering spectroscopy, scanning and high-resolution transmission electron microscopy, electron and x-ray diffraction, and x-ray photoelectron spectroscopy. We show that W nucleates and grows rapidly at localized sites on Ti during short WF 6 exposures~'6 s! at 445 °C at low partial pressurespWF6,0.2 Torr. Large amounts of F, up to '2.0310 atoms/cm corresponding to an average F/Ti ratio of 1.5 in a 27-nm-thick Ti layer, penetrate the Ti film, forming a solid solution and nonvolatile TiF3. The large stresses developed due to volume expansion during fluorination of the Ti layer result in local delamination at the W/Ti and the Ti/SiO 2 interfaces at low and high WF 6 exposures, respectively. WF 6 exposure atpWF6.0.35 results in the formation of a network of elongated microcracks in the W film which allow WF 6 to diffuse through and attack the underlying Ti, consuming the 27-nm-thick Ti film through the evolution of gaseous TiF 4. © 1999 American Institute of Physics. @S0021-8979 ~99!10303-7#",
"title": ""
},
{
"docid": "neg:1840445_4",
"text": "Compelling evidence indicates that the CRISPR-Cas system protects prokaryotes from viruses and other potential genome invaders. This adaptive prokaryotic immune system arises from the clustered regularly interspaced short palindromic repeats (CRISPRs) found in prokaryotic genomes, which harbor short invader-derived sequences, and the CRISPR-associated (Cas) protein-coding genes. Here, we have identified a CRISPR-Cas effector complex that is comprised of small invader-targeting RNAs from the CRISPR loci (termed prokaryotic silencing (psi)RNAs) and the RAMP module (or Cmr) Cas proteins. The psiRNA-Cmr protein complexes cleave complementary target RNAs at a fixed distance from the 3' end of the integral psiRNAs. In Pyrococcus furiosus, psiRNAs occur in two size forms that share a common 5' sequence tag but have distinct 3' ends that direct cleavage of a given target RNA at two distinct sites. Our results indicate that prokaryotes possess a unique RNA silencing system that functions by homology-dependent cleavage of invader RNAs.",
"title": ""
},
{
"docid": "neg:1840445_5",
"text": "In many cases, the topology of communcation systems can be abstracted and represented as graph. Graph theories and algorithms are useful in these situations. In this paper, we introduced an algorithm to enumerate all cycles in a graph. It can be applied on digraph or undirected graph. Multigraph can also be used on for this purpose. It can be used to enumerate given length cycles without enumerating all cycles. This algorithm is simple and easy to be implemented.",
"title": ""
},
{
"docid": "neg:1840445_6",
"text": "This paper addresses the task of finegrained opinion extraction – the identification of opinion-related entities: the opinion expressions, the opinion holders, and the targets of the opinions, and the relations between opinion expressions and their targets and holders. Most existing approaches tackle the extraction of opinion entities and opinion relations in a pipelined manner, where the interdependencies among different extraction stages are not captured. We propose a joint inference model that leverages knowledge from predictors that optimize subtasks of opinion extraction, and seeks a globally optimal solution. Experimental results demonstrate that our joint inference approach significantly outperforms traditional pipeline methods and baselines that tackle subtasks in isolation for the problem of opinion extraction.",
"title": ""
},
{
"docid": "neg:1840445_7",
"text": "BACKGROUND\nThe present study was designed to implement an interprofessional simulation-based education program for nursing students and evaluate the influence of this program on nursing students' attitudes toward interprofessional education and knowledge about operating room nursing.\n\n\nMETHODS\nNursing students were randomly assigned to either the interprofessional simulation-based education or traditional course group. A before-and-after study of nursing students' attitudes toward the program was conducted using the Readiness for Interprofessional Learning Scale. Responses to an open-ended question were categorized using thematic content analysis. Nursing students' knowledge about operating room nursing was measured.\n\n\nRESULTS\nNursing students from the interprofessional simulation-based education group showed statistically different responses to four of the nineteen questions in the Readiness for Interprofessional Learning Scale, reflecting a more positive attitude toward interprofessional learning. This was also supported by thematic content analysis of the open-ended responses. Furthermore, nursing students in the simulation-based education group had a significant improvement in knowledge about operating room nursing.\n\n\nCONCLUSIONS\nThe integrated course with interprofessional education and simulation provided a positive impact on undergraduate nursing students' perceptions toward interprofessional learning and knowledge about operating room nursing. Our study demonstrated that this course may be a valuable elective option for undergraduate nursing students in operating room nursing education.",
"title": ""
},
{
"docid": "neg:1840445_8",
"text": "We have developed a fast, perceptual method for selecting color scales for data visualization that takes advantage of our sensitivity to luminance variations in human faces. To do so, we conducted experiments in which we mapped various color scales onto the intensitiy values of a digitized photograph of a face and asked observers to rate each image. We found a very strong correlation between the perceived naturalness of the images and the degree to which the underlying color scales increased monotonically in luminance. Color scales that did not include a monotonically-increasing luminance component produced no positive rating scores. Since color scales with monotonic luminance profiles are widely recommended for visualizing continuous scalar data, a purely visual technique for identifying such color scales could be very useful, especially in situations where color calibration is not integrated into the visualization environment, such as over the Internet.",
"title": ""
},
{
"docid": "neg:1840445_9",
"text": "The present experiment was designed to test whether specific recordable changes in the neuromuscular system could be associated with specific alterations in soft- and hard-tissue morphology in the craniofacial region. The effect of experimentally induced neuromuscular changes on the craniofacial skeleton and dentition of eight rhesus monkeys was studied. The neuromuscular changes were triggered by complete nasal airway obstruction and the need for an oral airway. Alterations were also triggered 2 years later by removal of the obstruction and the return to nasal breathing. Changes in neuromuscular recruitment patterns resulted in changed function and posture of the mandible, tongue, and upper lip. There was considerable variation among the animals. Statistically significant morphologic effects of the induced changes were documented in several of the measured variables after the 2-year experimental period. The anterior face height increased more in the experimental animals than in the control animals; the occlusal and mandibular plane angles measured to the sella-nasion line increased; and anterior crossbites and malposition of teeth occurred. During the postexperimental period some of these changes were reversed. Alterations in soft-tissue morphology were also observed during both experimental periods. There was considerable variation in morphologic response among the animals. It was concluded that the marked individual variations in skeletal morphology and dentition resulting from the procedures were due to the variation in nature and degree of neuromuscular and soft-tissue adaptations in response to the altered function. The recorded neuromuscular recruitment patterns could not be directly related to specific changes in morphology.",
"title": ""
},
{
"docid": "neg:1840445_10",
"text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a ${\\mathbf{25}}\\hbox{--}{\\text{W/in}}^{3}$ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. Over 93% full-load efficiency was demonstrated at the worst case 90-V ac input and maximum full-load efficiency was 94.5%.",
"title": ""
},
{
"docid": "neg:1840445_11",
"text": "Touché proposes a novel Swept Frequency Capacitive Sensing technique that can not only detect a touch event, but also recognize complex configurations of the human hands and body. Such contextual information significantly enhances touch interaction in a broad range of applications, from conventional touchscreens to unique contexts and materials. For example, in our explorations we add touch and gesture sensitivity to the human body and liquids. We demonstrate the rich capabilities of Touché with five example setups from different application domains and conduct experimental studies that show gesture classification accuracies of 99% are achievable with our technology.",
"title": ""
},
{
"docid": "neg:1840445_12",
"text": "Within software engineering, requirements engineering starts from imprecise and vague user requirements descriptions and infers precise, formalized specifications. Techniques, such as interviewing by requirements engineers, are typically applied to identify the user's needs. We want to partially automate even this first step of requirements elicitation by methods of evolutionary computation. The idea is to enable users to specify their desired software by listing examples of behavioral descriptions. Users initially specify two lists of operation sequences, one with desired behaviors and one with forbidden behaviors. Then, we search for the appropriate formal software specification in the form of a deterministic finite automaton. We solve this problem known as grammatical inference with an active coevolutionary approach following Bongard and Lipson [2]. The coevolutionary process alternates between two phases: (A) additional training data is actively proposed by an evolutionary process and the user is interactively asked to label it; (B) appropriate automata are then evolved to solve this extended grammatical inference problem. Our approach leverages multi-objective evolution in both phases and outperforms the state-of-the-art technique [2] for input alphabet sizes of three and more, which are relevant to our problem domain of requirements specification.",
"title": ""
},
{
"docid": "neg:1840445_13",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "neg:1840445_14",
"text": "We examine the relation between executive compensation and corporate fraud. Executives at fraud firms have significantly larger equity-based compensation and greater financial incentives to commit fraud than do executives at industryand sizematched control firms. Executives at fraud firms also earn significantly more total compensation by exercising significantly larger fractions of their vested options than the control executives during the fraud years. Operating and stock performance measures suggest executives who commit corporate fraud attempt to offset declines in performance that would otherwise occur. Our results imply that optimal governance measures depend on the strength of executives’ financial incentives.",
"title": ""
},
{
"docid": "neg:1840445_15",
"text": "The incidence of clinically evident Curling's ulcer among 109 potentially salvageable severely burned patients was reviewed. These patients, who had greater than a 40 per cent body surface area burn, received one of these three treatment regimens: antacids hourly until autografting was complete, antacids hourly during the early postburn period followed by nutritional supplementation with Vivonex until autografting was complete or no antacids during the early postburn period but subsequent nutritional supplementation with Vivonex until autografting was complete. Clinically evident Curling's ulcer occurred in three patients. This incidence approximates the lowest reported among severely burned patients treated prophylactically with acid-reducing regimens to minimize clinically evident Curling's ulcer. In addition to its protective effect on Curling's ulcer, Vivonex, when used in combination with a high protein, high caloric diet, meets the caloric needs of the severely burned patient. Probably, Vivonex, which has a pH range of 4.5 to 5.4 protects against clinically evident Curling's ulcer by a dilutional alkalinization of gastric secretion.",
"title": ""
},
{
"docid": "neg:1840445_16",
"text": "Narita for their comments. Some of the results and ideas in this paper are similar to those in a working paper that I wrote in 2009, \"Bursting Bubbles: Consequences and Cures.\"",
"title": ""
},
{
"docid": "neg:1840445_17",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "neg:1840445_18",
"text": "Advertisement and Brand awareness plays an important role in brand building, brand recognition, brand loyalty and boost up the sales performance which is regarded as the foundation for brand development. To some degree advertisement and brand awareness can directly influence consumers’ buying behavior. The female consumers from IT industry have been taken as main consumers for the research purpose. The researcher seeks to inspect and investigate brand’s intention factors and consumer’s individual factors in influencing advertisement and its impact of brand awareness on fast moving consumer goods especially personal care products .The aim of the paper is to examine the advertising and its impact of brand awareness towards FMCG Products, on the other hand, to analyze the influence of advertising on personal care products among female consumers in IT industry and finally to study the impact of media on advertising & brand awareness. The prescribed survey were conducted in the form of questionnaire and found valid and reliable for this research. After evaluating some questions, better questionnaires were developed. Then the questionnaires were distributed among 200 female consumers with a response rate of 100%. We found that advertising has constantly a significant positive effect on brand awareness and consumers perceive the brand awareness with positive attitude. Findings depicts that advertising and brand awareness have strong positive influence and considerable relationship with purchase intention of the consumer. This research highlights that female consumers of personal care products in IT industry are more brand conscious and aware about their personal care products. Advertisement and brand awareness affects their purchase intention positively; also advertising media positively influences the brand awareness and purchase intention of the female consumers. The obtained data were then processed by Pearson correlation, multiple regression analysis and ANOVA. A Study On Advertising And Its Impact Of Brand Awareness On Fast Moving Consumer Goods With Reference To Personal Care Products In Chennai Paper ID IJIFR/ V2/ E9/ 068 Page No. 3325-3333 Subject Area Business Administration",
"title": ""
},
{
"docid": "neg:1840445_19",
"text": "The recent successes of deep learning have led to a wave of interest from non-experts. Gaining an understanding of this technology, however, is difficult. While the theory is important, it is also helpful for novices to develop an intuitive feel for the effect of different hyperparameters and structural variations. We describe TensorFlow Playground, an interactive, open sourced visualization that allows users to experiment via direct manipulation rather than coding, enabling them to quickly build an intuition about neural nets.",
"title": ""
}
] |
1840446 | Effective Botnet Detection Through Neural Networks on Convolutional Features | [
{
"docid": "pos:1840446_0",
"text": "HTTP is becoming the most preferred channel for command and control (C&C) communication of botnets. One of the main reasons is that it is very easy to hide the C&C traffic in the massive amount of browser generated Web traffic. However, detecting these HTTP-based C&C packets which constitute only a minuscule portion of the overall everyday HTTP traffic is a formidable task. In this paper, we present an anomaly detection based approach to detect HTTP-based C&C traffic using statistical features based on client generated HTTP request packets and DNS server generated response packets. We use three different unsupervised anomaly detection techniques to isolate suspicious communications that have a high probability of being part of a botnet's C&C communication. Results indicate that our method can achieve more than 90% detection rate while maintaining a reasonably low false positive rate.",
"title": ""
},
{
"docid": "pos:1840446_1",
"text": "In recent years, the botnet phenomenon is one of the most dangerous threat to Internet security, which supports a wide range of criminal activities, including distributed denial of service (DDoS) attacks, click fraud, phishing, malware distribution, spam emails, etc. An increasing number of botnets use Domain Generation Algorithms (DGAs) to avoid detection and exclusion by the traditional methods. By dynamically and frequently generating a large number of random domain names for candidate command and control (C&C) server, botnet can be still survive even when a C&C server domain is identified and taken down. This paper presents a novel method to detect DGA botnets using Collaborative Filtering and Density-Based Clustering. We propose a combination of clustering and classification algorithm that relies on the similarity in characteristic distribution of domain names to remove noise and group similar domains. Collaborative Filtering (CF) technique is applied to find out bots in each botnet, help finding out offline malwares infected-machine. We implemented our prototype system, carried out the analysis of a huge amount of DNS traffic log of Viettel Group and obtain positive results.",
"title": ""
}
] | [
{
"docid": "neg:1840446_0",
"text": "In this work, we propose a generalized product of experts (gPoE) framework for combining the predictions of multiple probabilistic models. We identify four desirable properties that are important for scalability, expressiveness and robustness, when learning and inferring with a combination of multiple models. Through analysis and experiments, we show that gPoE of Gaussian processes (GP) have these qualities, while no other existing combination schemes satisfy all of them at the same time. The resulting GP-gPoE is highly scalable as individual GP experts can be independently learned in parallel; very expressive as the way experts are combined depends on the input rather than fixed; the combined prediction is still a valid probabilistic model with natural interpretation; and finally robust to unreliable predictions from individual experts.",
"title": ""
},
{
"docid": "neg:1840446_1",
"text": "Abstract Traditionally, firewalls and access control have been the most important components used in order to secure servers, hosts and computer networks. Today, intrusion detection systems (IDSs) are gaining attention and the usage of these systems is increasing. This thesis covers commercial IDSs and the future direction of these systems. A model and taxonomy for IDSs and the technologies behind intrusion detection is presented. Today, many problems exist that cripple the usage of intrusion detection systems. The decreasing confidence in the alerts generated by IDSs is directly related to serious problems like false positives. By studying IDS technologies and analyzing interviews conducted with security departments at Swedish banks, this thesis identifies the major problems within IDSs today. The identified problems, together with recent IDS research reports published at the RAID 2002 symposium, are used to recommend the future direction of commercial intrusion detection systems. Intrusion Detection Systems – Technologies, Weaknesses and Trends",
"title": ""
},
{
"docid": "neg:1840446_2",
"text": "Social media has quickly risen to prominence as a news source, yet lingering doubts remain about its ability to spread rumor and misinformation. Systematically studying this phenomenon, however, has been difficult due to the need to collect large-scale, unbiased data along with in-situ judgements of its accuracy. In this paper we present CREDBANK, a corpus designed to bridge this gap by systematically combining machine and human computation. Specifically, CREDBANK is a corpus of tweets, topics, events and associated human credibility judgements. It is based on the real-time tracking of more than 1 billion streaming tweets over a period of more than three months, computational summarizations of those tweets, and intelligent routings of the tweet streams to human annotators—within a few hours of those events unfolding on Twitter. In total CREDBANK comprises more than 60 million tweets grouped into 1049 real-world events, each annotated by 30 human annotators. As an example, with CREDBANK one can quickly calculate that roughly 24% of the events in the global tweet stream are not perceived as credible. We have made CREDBANK publicly available, and hope it will enable new research questions related to online information credibility in fields such as social science, data mining and health.",
"title": ""
},
{
"docid": "neg:1840446_3",
"text": "This paper gives an overview of MOSFET mismatch effects that form a performance/yield limitation for many designs. After a general description of (mis)matching, a comparison over past and future process generations is presented. The application of the matching model in CAD and analog circuit design is discussed. Mismatch effects gain importance as critical dimensions and CMOS power supply voltages decrease.",
"title": ""
},
{
"docid": "neg:1840446_4",
"text": "We propose a new technique for training deep neural networks (DNNs) as data-driven feature front-ends for large vocabulary continuous speech recognition (LVCSR) in low resource settings. To circumvent the lack of sufficient training data for acoustic modeling in these scenarios, we use transcribed multilingual data and semi-supervised training to build the proposed feature front-ends. In our experiments, the proposed features provide an absolute improvement of 16% in a low-resource LVCSR setting with only one hour of in-domain training data. While close to three-fourths of these gains come from DNN-based features, the remaining are from semi-supervised training.",
"title": ""
},
{
"docid": "neg:1840446_5",
"text": "This paper addresses the problem of road scene segmentation in conventional RGB images by exploiting recent advances in semantic segmentation via convolutional neural networks (CNNs). Segmentation networks are very large and do not currently run at interactive frame rates. To make this technique applicable to robotics we propose several architecture refinements that provide the best trade-off between segmentation quality and runtime. This is achieved by a new mapping between classes and filters at the expansion side of the network. The network is trained end-to-end and yields precise road/lane predictions at the original input resolution in roughly 50ms. Compared to the state of the art, the network achieves top accuracies on the KITTI dataset for road and lane segmentation while providing a 20× speed-up. We demonstrate that the improved efficiency is not due to the road segmentation task. Also on segmentation datasets with larger scene complexity, the accuracy does not suffer from the large speed-up.",
"title": ""
},
{
"docid": "neg:1840446_6",
"text": "In this paper, a 2×2 broadside array of 3D printed half-wave dipole antennas is presented. The array design leverages direct digital manufacturing (DDM) technology to realize a shaped substrate structure that is used to control the array beamwidth. The non-planar substrate allows the element spacing to be changed without affecting the length of the feed network or the distance to the underlying ground plane. The 4-element array has a broadside gain that varies between 7.0–8.5 dBi depending on the out-of-plane angle of the substrate. Acrylonitrile Butadiene Styrene (ABS) is deposited using fused deposition modeling to form the array structure (relative permittivity of 2.7 and loss tangent of 0.008) and Dupont CB028 silver paste is used to form the conductive traces.",
"title": ""
},
{
"docid": "neg:1840446_7",
"text": "Mordeson, J.N., Fuzzy line graphs, Pattern Recognition Letters 14 (1993) 381 384. The notion of a fuzzy line graph of a fuzzy graph is introduced. We give a necessary and sufficient condition for a fuzzy graph to be isomorphic to its corresponding fuzzy line graph. We examine when an isomorphism between two fuzzy graphs follows from an isomorphism of their corresponding fuzzy line graphs. We give a necessary and sufficient condition for a fuzzy graph to be the fuzzy line graph of some fuzzy graph.",
"title": ""
},
{
"docid": "neg:1840446_8",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
{
"docid": "neg:1840446_9",
"text": "The iSTAR Micro Air Vehicle (MAV) is a unique 9-inch diameter ducted air vehicle weighing approximately 4 lb. The configuration consists of a ducted fan with control vanes at the duct exit plane. This VTOL aircraft not only hovers, but it can also fly at high forward speed by pitching over to a near horizontal attitude. The duct both increases propulsion efficiency and produces lift in horizontal flight, similar to a conventional planar wing. The vehicle is controlled using a rate based control system with piezo-electric gyroscopes. The Flight Control Computer (FCC) processes the pilot’s commands and the rate data from the gyroscopes to stabilize and control the vehicle. First flight of the iSTAR MAV was successfully accomplished in October 2000. Flight at high pitch angles and high speed took place in November 2000. This paper describes the vehicle, control system, and ground and flight-test results . Presented at the American Helicopter Society 57 Annual forum, Washington, DC, May 9-11, 2001. Copyright 2001 by the American Helicopter Society International, Inc. All rights reserved. Introduction The Micro Craft Inc. iSTAR is a Vertical Take-Off and Landing air vehicle (Figure 1) utilizing ducted fan technology to hover and fly at high forward speed. The duct both increases the propulsion efficiency and provides direct lift in forward flight similar to a conventional planar wing. However, there are many other benefits inherent in the iSTAR design. In terms of safety, the duct protects personnel from exposure to the propeller. The vehicle also has a very small footprint, essentially a circle equal to the diameter of the duct. This is beneficial for stowing, transporting, and in operations where space is critical, such as on board ships. The simplicity of the design is another major benefit. The absence of complex mechanical systems inherent in other VTOL designs (e.g., gearboxes, articulating blades, and counter-rotating propellers) benefits both reliability and cost. Figure 1: iSTAR Micro Air Vehicle The Micro Craft iSTAR VTOL aircraft is able to both hover and fly at high speed by pitching over towards a horizontal attitude (Figure 2). Although many aircraft in history have utilized ducted fans, most of these did not attempt to transition to high-speed forward flight. One of the few aircraft that did successfully transition was the Bell X-22 (Reference 1), first flown in 1965. The X-22, consisted of a fuselage and four ducted fans that rotated relative to the fuselage to transition the vehicle forward. The X-22 differed from the iSTAR in that its fuselage remained nearly level in forward flight, and the ducts rotated relative to the fuselage. Also planar tandem wings, not the ducts themselves, generated a large portion of the lift in forward flight. 1 Micro Craft Inc. is a division of Allied Aerospace Industry Incorporated (AAII) One of the first aircraft using an annular wing for direct lift was the French Coleoptère (Reference 1) built in the late 1950s. This vehicle successfully completed transition from hovering flight using an annular wing, however a ducted propeller was not used. Instead, a single jet engine was mounted inside the center-body for propulsion. Control was achieved by deflecting vanes inside the jet exhaust, with small external fins attached to the duct, and also with deployable strakes on the nose. 
Figure 2: Hover & flight at forward speed Less well-known are the General Dynamics ducted-fan Unmanned Air Vehicles, which were developed and flown starting in 1960 with the PEEK (Reference 1) aircraft. These vehicles, a precursor to the Micro Craft iSTAR, demonstrated stable hover and low speed flight in free-flight tests, and transition to forward flight in tethered ground tests. In 1999, Micro Craft acquired the patent, improved and miniaturized the design, and manufactured two 9-inch diameter flight test vehicles under DARPA funding (Reference 1). Working in conjunction with BAE systems (formerly Lockheed Sanders) and the Army/NASA Rotorcraft Division, these vehicles have recently completed a proof-ofconcept flight test program and have been demonstrated to DARPA and the US Army. Military applications of the iSTAR include intelligence, surveillance, target acquisition, and reconnaissance. Commercial applications include border patrol, bridge inspection, and police surveillance. Vehicle Description The iSTAR is composed of four major assemblies as shown in Figure 3: (1) the upper center-body, (2) the lower center body, (3) the duct, and (4) the landing ring. The majority of the vehicle’s structure is composed of Kevlar composite material resulting in a very strong and lightweight structure. Kevlar also lacks the brittleness common to other composite materials. Components that are not composite include the engine bulkhead (aluminum) and the landing ring (steel wire). The four major assemblies are described below. The upper center-body (UCB) is cylindrical in shape and contains the engine, engine controls, propeller, and payload. Three sets of hollow struts support the UCB and pass fuel and wiring to the duct. The propulsion Hover Low Speed High Speed system is a commercial-off-the-shelf (COTS) OS-32 SX single cylinder engine. This engine develops 1.2 hp and weighs approximately 250 grams (~0.5 lb.). Fuel consists of a mixture of alcohol, nitro-methane, and oil. The fixed-pitch propeller is attached directly to the engine shaft (without a gearbox). Starting the engine is accomplished by inserting a cylindrical shaft with an attached gear into the upper center-body and meshing it with a gear fit onto the propeller shaft (see Figure 4). The shaft is rotated using an off-board electric starter (Micro Craft is also investigating on-board starting systems). Figure 3: iSTAR configuration A micro video camera is mounted inside the nose cone, which is easily removable to accommodate modular payloads. The entire UCB can be removed in less than five minutes by removing eight screws securing the struts, and then disconnecting one fuel line and one electrical connector. Figure 4: Engine starting The lower center-body (LCB) is cylindrical in shape and is supported by eight stators. The sensor board is housed in the LCB, and contains three piezo-electric gyroscopes, three accelerometers, a voltage regulator, and amplifiers. The sensor signals are routed to the processor board in the duct via wires integrated into the stators. The duct is nine inches in diameter and contains a significant amount of volume for packaging. The fuel tank, flight control Computer (FCC), voltage regulator, batteries, servos, and receiver are all housed inside the duct. Fuel is contained in the leading edge of the duct. This tank is non-structural, and easily removable. It is attached to the duct with tape. Internal to the duct are eight fixed stators. 
The angle of the stators is set so that they produce an aerodynamic rolling moment countering the torque of the engine. Control vanes are attached to the trailing edge of the stators, providing roll, yaw, and pitch control. Four servos mounted inside the duct actuate the control vanes. Many different landing systems have been studied in the past. These trade studies have identified the landing ring as superior overall to other systems. The landing ring stabilizes the vehicle in close proximity to the ground by providing a restoring moment in dynamic situations. For example, if the vehicle were translating slowly and contacted the ground, the ring would pitch the vehicle upright. The ring also reduces blockage of the duct during landing and take-off by raising the vehicle above the ground. Blocking the duct can lead to reduced thrust and control power. Landing feet have also been considered because of their reduced weight. However, landing ‘feet’ lack the self-stabilizing characteristics of the ring in dynamic situations and tend to ‘catch’ on uneven surfaces. Electronics and Control System The Flight Control Computer (FCC) is housed in the duct (Figure 5). The computer processes the sensor output and pilot commands and generates pulse width modulated (PWM) signals to drive the servos. Pilot commands are generated using two conventional joysticks. The left joystick controls throttle position and heading. The right joystick controls pitch and yaw rate. The aircraft axis system is defined such that the longitudinal axis is coaxial with the engine shaft. Therefore, in hover the pitch attitude is 90 degrees and rolling the aircraft produces a heading change. Dedicated servos are used for pitch and yaw control. However, all control vanes are used for roll control (four quadrant roll control). The FCC provides the appropriate mixing for each servo. In each axis, the control system architecture consists of a conventional Proportional-Integral-Derivative (PID) controller with single-input and single-output. Initially, an attitude-based control system was desired, however Upper Center-body Fuel Tank Fixed Stator Control Vane Actuator Landing Ring Lower Center-body Duct Engine and Controls Prop/Fan Support struts due to the lack of acceleration information and the high gyroscope drift rates, accurate attitudes could not be calculated. For this reason, a rate system was ultimately implemented. Three Murata micro piezo-electric gyroscopes provide rates about all three axes. These gyroscopes are approximately 0.6”x0.3”x0.15” in size and weigh 1 gram each (Figure 6). Figure 5: Flight Control Computer Four COTS servos are located in the duct to actuate the control surfaces. Each servo weighs 28 grams and is 1.3”x1.3”x0.6” in size. Relative to typical UAV servos, they can generate high rates, but have low bandwidth. Bandwidth is defined by how high a frequency the servo can accurately follow an input signal. For all servos, the output lags behind the input and the signal degrades in magnitude as the frequency increases. At low frequency, the iSTAR MAV servo output signal lags by approximately 30°,",
"title": ""
},
{
"docid": "neg:1840446_10",
"text": "Deep learning is a popular technique in modern online and offline services. Deep neural network based learning systems have made groundbreaking progress in model size, training and inference speed, and expressive power in recent years, but to tailor the model to specific problems and exploit data and problem structures is still an ongoing research topic. We look into two types of deep ‘‘multi-’’ objective learning problems: multi-view learning, referring to learning from data represented by multiple distinct feature sets, and multi-label learning, referring to learning from data instances belonging to multiple class labels that are not mutually exclusive. Research endeavors of both problems attempt to base on existing successful deep architectures and make changes of layers, regularization terms or even build hybrid systems to meet the problem constraints. In this report we first explain the original artificial neural network (ANN) with the backpropagation learning algorithm, and also its deep variants, e.g. deep belief network (DBN), convolutional neural network (CNN) and recurrent neural network (RNN). Next we present a survey of some multi-view and multi-label learning frameworks based on deep neural networks. At last we introduce some applications of deep multi-view and multi-label learning, including e-commerce item categorization, deep semantic hashing, dense image captioning, and our preliminary work on x-ray scattering image classification.",
"title": ""
},
{
"docid": "neg:1840446_11",
"text": "High-school grades are often viewed as an unreliable criterion for college admissions, owing to differences in grading standards across high schools, while standardized tests are seen as methodologically rigorous, providing a more uniform and valid yardstick for assessing student ability and achievement. The present study challenges that conventional view. The study finds that high-school grade point average (HSGPA) is consistently the best predictor not only of freshman grades in college, the outcome indicator most often employed in predictive-validity studies, but of four-year college outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003), demonstrated that HSGPA in college-preparatory courses was the best predictor of freshman grades for a sample of almost 80,000 students admitted to the University of California. Because freshman grades provide only a short-term indicator of college performance, the present study tracked four-year college outcomes, including cumulative college grades and graduation, for the same sample in order to examine the relative contribution of high-school record and standardized tests in predicting longerterm college performance. Key findings are: (1) HSGPA is consistently the strongest predictor of four-year college outcomes for all academic disciplines, campuses and freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated with HSGPA increases after the freshman year, accounting for a greater proportion of variance in cumulative fourth-year than first-year college grades; and (3) as an admissions criterion, HSGPA has less adverse impact than standardized tests on disadvantaged and underrepresented minority students. The paper concludes with a discussion of the implications of these findings for admissions policy and argues for greater emphasis on the high-school record, and a corresponding de-emphasis on standardized tests, in college admissions. * The study was supported by a grant from the Koret Foundation. Geiser and Santelices: VALIDITY OF HIGH-SCHOOL GRADES 2 CSHE Research & Occasional Paper Series Introduction and Policy Context This study examines the relative contribution of high-school grades and standardized admissions tests in predicting students’ long-term performance in college, including cumulative grade-point average and college graduation. The relative emphasis on grades vs. tests as admissions criteria has become increasingly visible as a policy issue at selective colleges and universities, particularly in states such as Texas and California, where affirmative action has been challenged or eliminated. Compared to high-school gradepoint average (HSGPA), scores on standardized admissions tests such as the SAT I are much more closely correlated with students’ socioeconomic background characteristics. As shown in Table 1, for example, among our study sample of almost 80,000 University of California (UC) freshmen, SAT I verbal and math scores exhibit a strong, positive relationship with measures of socioeconomic status (SES) such as family income, parents’ education and the academic ranking of a student’s high school, whereas HSGPA is only weakly associated with such measures. As a result, standardized admissions tests tend to have greater adverse impact than HSGPA on underrepresented minority students, who come disproportionately from disadvantaged backgrounds. 
The extent of the difference can be seen by rank-ordering students on both standardized tests and high-school grades and comparing the distributions. Rank-ordering students by test scores produces much sharper racial/ethnic stratification than when the same students are ranked by HSGPA, as shown in Table 2. It should be borne in mind that the UC sample shown here represents a highly select group of students, drawn from the top 12.5% of California high-school graduates under the provisions of the state's Master Plan for Higher Education. Overall, under-represented minority students account for about 17 percent of that group, although their percentage varies considerably across different HSGPA and SAT levels within the sample. When students are ranked by HSGPA, underrepresented minorities account for 28 percent of students in the bottom [...]. Table 1, Correlation of Admissions Factors with SES (source: UC Corporate Student System data on 79,785 first-time freshmen entering between Fall 1996 and Fall 1999): SAT I verbal correlates 0.32 with family income, 0.39 with parents' education and 0.32 with school API decile; SAT I math correlates 0.24, 0.32 and 0.39 respectively; HSGPA correlates 0.04, 0.06 and 0.01 respectively.",
"title": ""
},
{
"docid": "neg:1840446_12",
"text": "Fingerprint-spoofing attack often occurs when imposters gain access illegally by using artificial fingerprints, which are made of common fingerprint materials, such as silicon, latex, etc. Thus, to protect our privacy, many fingerprint liveness detection methods are put forward to discriminate fake or true fingerprint. Current work on liveness detection for fingerprint images is focused on the construction of complex handcrafted features, but these methods normally destroy or lose spatial information between pixels. Different from existing methods, convolutional neural network (CNN) can generate high-level semantic representations by learning and concatenating low-level edge and shape features from a large amount of labeled data. Thus, CNN is explored to solve the above problem and discriminate true fingerprints from fake ones in this paper. To reduce the redundant information and extract the most distinct features, ROI and PCA operations are performed for learned features of convolutional layer or pooling layer. After that, the extracted features are fed into SVM classifier. Experimental results based on the LivDet (2013) and the LivDet (2011) datasets, which are captured by using different fingerprint materials, indicate that the classification performance of our proposed method is both efficient and convenient compared with the other previous methods.",
"title": ""
},
{
"docid": "neg:1840446_13",
"text": "In this paper we describe a deep network architecture that maps visual input to control actions for a robotic planar reaching task with 100% reliability in real-world trials. Our network is trained in simulation and fine-tuned with a limited number of real-world images. The policy search is guided by a kinematics-based controller (K-GPS), which works more effectively and efficiently than ε-Greedy. A critical insight in our system is the need to introduce a bottleneck in the network between the perception and control networks, and to initially train these networks independently.",
"title": ""
},
{
"docid": "neg:1840446_14",
"text": "This paper presents a semi-supervised learning framework for a customized semantic segmentation task using multiview image streams. A key challenge of the customized task lies in the limited accessibility of the labeled data due to the requirement of prohibitive manual annotation effort. We hypothesize that it is possible to leverage multiview image streams that are linked through the underlying 3D geometry, which can provide an additional supervisionary signal to train a segmentation model. We formulate a new cross-supervision method using a shape belief transfer—the segmentation belief in one image is used to predict that of the other image through epipolar geometry analogous to shape-from-silhouette. The shape belief transfer provides the upper and lower bounds of the segmentation for the unlabeled data where its gap approaches asymptotically to zero as the number of the labeled views increases. We integrate this theory to design a novel network that is agnostic to camera calibration, network model, and semantic category and bypasses the intermediate process of suboptimal 3D reconstruction. We validate this network by recognizing a customized semantic category per pixel from realworld visual data including non-human species and a subject of interest in social videos where attaining large-scale annotation data is infeasible.",
"title": ""
},
{
"docid": "neg:1840446_15",
"text": "Most proteins must fold into defined three-dimensional structures to gain functional activity. But in the cellular environment, newly synthesized proteins are at great risk of aberrant folding and aggregation, potentially forming toxic species. To avoid these dangers, cells invest in a complex network of molecular chaperones, which use ingenious mechanisms to prevent aggregation and promote efficient folding. Because protein molecules are highly dynamic, constant chaperone surveillance is required to ensure protein homeostasis (proteostasis). Recent advances suggest that an age-related decline in proteostasis capacity allows the manifestation of various protein-aggregation diseases, including Alzheimer's disease and Parkinson's disease. Interventions in these and numerous other pathological states may spring from a detailed understanding of the pathways underlying proteome maintenance.",
"title": ""
},
{
"docid": "neg:1840446_16",
"text": "Question answering (Q&A) communities have been gaining popularity in the past few years. The success of such sites depends mainly on the contribution of a small number of expert users who provide a significant portion of the helpful answers, and so identifying users that have the potential of becoming strong contributers is an important task for owners of such communities.\n We present a study of the popular Q&A website StackOverflow (SO), in which users ask and answer questions about software development, algorithms, math and other technical topics. The dataset includes information on 3.5 million questions and 6.9 million answers created by 1.3 million users in the years 2008--2012. Participation in activities on the site (such as asking and answering questions) earns users reputation, which is an indicator of the value of that user to the site.\n We describe an analysis of the SO reputation system, and the participation patterns of high and low reputation users. The contributions of very high reputation users to the site indicate that they are the primary source of answers, and especially of high quality answers. Interestingly, we find that while the majority of questions on the site are asked by low reputation users, on average a high reputation user asks more questions than a user with low reputation. We consider a number of graph analysis methods for detecting influential and anomalous users in the underlying user interaction network, and find they are effective in detecting extreme behaviors such as those of spam users. Lastly, we show an application of our analysis: by considering user contributions over first months of activity on the site, we predict who will become influential long-term contributors.",
"title": ""
},
{
"docid": "neg:1840446_17",
"text": "The feature extraction stage of speech recognition is important historically and is the subject of much current research, particularly to promote robustness to acoustic disturbances such as additive noise and reverberation. Biologically inspired and biologically related approaches are an important subset of feature extraction methods for ASR.",
"title": ""
},
{
"docid": "neg:1840446_18",
"text": "We consider the task of automated estimation of facial expression intensity. This involves estimation of multiple output variables (facial action units — AUs) that are structurally dependent. Their structure arises from statistically induced co-occurrence patterns of AU intensity levels. Modeling this structure is critical for improving the estimation performance; however, this performance is bounded by the quality of the input features extracted from face images. The goal of this paper is to model these structures and estimate complex feature representations simultaneously by combining conditional random field (CRF) encoded AU dependencies with deep learning. To this end, we propose a novel Copula CNN deep learning approach for modeling multivariate ordinal variables. Our model accounts for ordinal structure in output variables and their non-linear dependencies via copula functions modeled as cliques of a CRF. These are jointly optimized with deep CNN feature encoding layers using a newly introduced balanced batch iterative training algorithm. We demonstrate the effectiveness of our approach on the task of AU intensity estimation on two benchmark datasets. We show that joint learning of the deep features and the target output structure results in significant performance gains compared to existing deep structured models for analysis of facial expressions.",
"title": ""
},
{
"docid": "neg:1840446_19",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] |
1840447 | A 10-Bit 0.5 V 100 KS/S SAR ADC with a New rail-to-rail Comparator for Energy Limited Applications | [
{
"docid": "pos:1840447_0",
"text": "A novel switched-current successive approximation ADC is presented in this paper with high speed and low power consumption. The proposed ADC contains a new high-accuracy and power-e±cient switched-current S/H circuit and a speed-improved current comparator. Designed and simulated in a 0:18m CMOS process, this 8-bit ADC achieves 46.23 dB SNDR at 1.23 MS/s consuming 73:19 W under 1.2 V voltage supply, resulting in an ENOB of 7.38-bit and an FOM of 0.357 pJ/Conv.-step.",
"title": ""
},
{
"docid": "pos:1840447_1",
"text": "The matching properties of the threshold voltage, substrate factor and current factor of MOS transistors have been analysed and measured. Improvements of the existing theory are given, as well as extensions for long distance matching and rotation of devices. The matching results have been verified by measurements and calculations on a band-gap reference circuit.",
"title": ""
}
] | [
{
"docid": "neg:1840447_0",
"text": "Different works have shown that the combination of multiple loss functions is beneficial when training deep neural networks for a variety of prediction tasks. Generally, such multi-loss approaches are implemented via a weighted multi-loss objective function in which each term encodes a different desired inference criterion. The importance of each term is often set using empirically tuned hyper-parameters. In this work, we analyze the importance of the relative weighting between the different terms of a multi-loss function and propose to leverage the model’s uncertainty with respect to each loss as an automatically learned weighting parameter. We consider the application of colon gland analysis from histopathology images for which various multi-loss functions have been proposed. We show improvements in classification and segmentation accuracy when using the proposed uncertainty driven multi-loss function.",
"title": ""
},
{
"docid": "neg:1840447_1",
"text": "Feeling emotion is a critical characteristic to distinguish people from machines. Among all the multi-modal resources for emotion detection, textual datasets are those containing the least additional information in addition to semantics, and hence are adopted widely for testing the developed systems. However, most of the textual emotional datasets consist of emotion labels of only individual words, sentences or documents, which makes it challenging to discuss the contextual flow of emotions. In this paper, we introduce EmotionLines, the first dataset with emotions labeling on all utterances in each dialogue only based on their textual content. Dialogues in EmotionLines are collected from Friends TV scripts and private Facebook messenger dialogues. Then one of seven emotions, six Ekman’s basic emotions plus the neutral emotion, is labeled on each utterance by 5 Amazon MTurkers. A total of 29,245 utterances from 2,000 dialogues are labeled in EmotionLines. We also provide several strong baselines for emotion detection models on EmotionLines in this paper.",
"title": ""
},
{
"docid": "neg:1840447_2",
"text": "We describe two general approaches to creating document-level maps of science. To create a local map one defines and directly maps a sample of data, such as all literature published in a set of information science journals. To create a global map of a research field one maps ‘all of science’ and then locates a literature sample within that full context. We provide a deductive argument that global mapping should create more accurate partitions of a research field than local mapping, followed by practical reasons why this may not be so. The field of information science is then mapped at the document level using both local and global methods to provide a case illustration of the differences between the methods. Textual coherence is used to assess the accuracies of both maps. We find that document clusters in the global map have significantly higher coherence than those in the local map, and that the global map provides unique insights into the field of information science that cannot be discerned from the local map. Specifically, we show that information science and computer science have a large interface and that computer science is the more progressive discipline at that interface. We also show that research communities in temporally linked threads have a much higher coherence than isolated communities, and that this feature can be used to predict which threads will persist into a subsequent year. Methods that could increase the accuracy of both local and global maps in the future are also discussed.",
"title": ""
},
{
"docid": "neg:1840447_3",
"text": "A current-mode dc-dc converter with an on-chip current sensor is presented in this letter. The current sensor has significant improvement on the current-sensing speed. The sensing ratio of the current sensor has low sensitivity to the variation of the process, voltage, temperature and loading. The current sensor combines the sensed inductor current signal with the compensation ramp signal and the output of the error amplifier smoothly. The settling time of the current sensor is less than 10 ns. In the current-mode dc-dc converter application, the differential output of the current sensor can be directly sent to the pulse-width modulation comparator. With the proposed current sensor, the dc-dc converter could realize a low duty cycle with a high switching frequency. The dc-dc converter has been fabricated by CSMC 0.5-μm 5-V CMOS process with a die size of 2.25 mm 2. Experimental results show that the current-mode converter can achieve a duty cycle down to 0.11 with a switching frequency up to 4 MHz. The measured transient response time is less than 6 μs as the load current changes between 50 and 600 mA, rapidly.",
"title": ""
},
{
"docid": "neg:1840447_4",
"text": "UPON Lite focuses on users, typically domain experts without ontology expertise, minimizing the role of ontology engineers.",
"title": ""
},
{
"docid": "neg:1840447_5",
"text": "This paper presents a new solution for filtering current harmonics in three-phase four-wire networks. The original four-branch star (FBS) filter topology presented in this paper is characterized by a particular layout of single-phase inductances and capacitors, without using any transformer or special electromagnetic device. Via this layout, a power filter, with two different and simultaneous resonance frequencies and sequences, is achieved-one frequency for positive-/negative-sequence and another one for zero-sequence components. This filter topology can work either as a passive filter, when only passive components are employed, or as a hybrid filter, when its behavior is improved by integrating a power converter into the filter structure. The paper analyzes the proposed topology, and derives fundamental concepts about the control of the resulting hybrid power filter. From this analysis, a specific implementation of a three-phase four-wire hybrid power filter is presented as an illustrative application of the filtering topology. An extensive evaluation using simulation and experimental results from a DSP-based laboratory prototype is conducted in order to verify and validate the good performance achieved by the proposed FBS passive/hybrid power filter.",
"title": ""
},
{
"docid": "neg:1840447_6",
"text": "Software vulnerabilities are the root cause of a wide range of attacks. Existing vulnerability scanning tools are able to produce a set of suspects. However, they often suffer from a high false positive rate. Convicting a suspect and vindicating false positives are mostly a highly demanding manual process, requiring a certain level of understanding of the software. This limitation significantly thwarts the application of these tools by system administrators or regular users who are concerned about security but lack of understanding of, or even access to, the source code. It is often the case that even developers are reluctant to inspect/fix these numerous suspects unless they are convicted by evidence. In this paper, we propose a lightweight dynamic approach which generates evidence for various security vulnerabilities in software, with the goal of relieving the manual procedure. It is based on data lineage tracing, a technique that associates each execution point precisely with a set of relevant input values. These input values can be mutated by an offline analysis to generate exploits. We overcome the efficiency challenge by using Binary Decision Diagrams (BDD). Our tool successfully generates exploits for all the known vulnerabilities we studied. We also use it to uncover a number of new vulnerabilities, proved by evidence.",
"title": ""
},
{
"docid": "neg:1840447_7",
"text": "To what extent do we share feelings with others? Neuroimaging investigations of the neural mechanisms involved in the perception of pain in others may cast light on one basic component of human empathy, the interpersonal sharing of affect. In this fMRI study, participants were shown a series of still photographs of hands and feet in situations that are likely to cause pain, and a matched set of control photographs without any painful events. They were asked to assess on-line the level of pain experienced by the person in the photographs. The results demonstrated that perceiving and assessing painful situations in others was associated with significant bilateral changes in activity in several regions notably, the anterior cingulate, the anterior insula, the cerebellum, and to a lesser extent the thalamus. These regions are known to play a significant role in pain processing. Finally, the activity in the anterior cingulate was strongly correlated with the participants' ratings of the others' pain, suggesting that the activity of this brain region is modulated according to subjects' reactivity to the pain of others. Our findings suggest that there is a partial cerebral commonality between perceiving pain in another individual and experiencing it oneself. This study adds to our understanding of the neurological mechanisms implicated in intersubjectivity and human empathy.",
"title": ""
},
{
"docid": "neg:1840447_8",
"text": "Life expectancy in most countries has been increasing continually over the several few decades thanks to significant improvements in medicine, public health, as well as personal and environmental hygiene. However, increased life expectancy combined with falling birth rates are expected to engender a large aging demographic in the near future that would impose significant burdens on the socio-economic structure of these countries. Therefore, it is essential to develop cost-effective, easy-to-use systems for the sake of elderly healthcare and well-being. Remote health monitoring, based on non-invasive and wearable sensors, actuators and modern communication and information technologies offers an efficient and cost-effective solution that allows the elderly to continue to live in their comfortable home environment instead of expensive healthcare facilities. These systems will also allow healthcare personnel to monitor important physiological signs of their patients in real time, assess health conditions and provide feedback from distant facilities. In this paper, we have presented and compared several low-cost and non-invasive health and activity monitoring systems that were reported in recent years. A survey on textile-based sensors that can potentially be used in wearable systems is also presented. Finally, compatibility of several communication technologies as well as future perspectives and research challenges in remote monitoring systems will be discussed.",
"title": ""
},
{
"docid": "neg:1840447_9",
"text": "The European honey bee exploits floral resources efficiently and may therefore compete with solitary wild bees. Hence, conservationists and bee keepers are debating about the consequences of beekeeping for the conservation of wild bees in nature reserves. We observed flower-visiting bees on flowers of Calluna vulgaris in sites differing in the distance to the next honey-bee hive and in sites with hives present and absent in the Lüneburger Heath, Germany. Additionally, we counted wild bee ground nests in sites that differ in their distance to the next hive and wild bee stem nests and stem-nesting bee species in sites with hives present and absent. We did not observe fewer honey bees or higher wild bee flower visits in sites with different distances to the next hive (up to 1,229 m). However, wild bees visited fewer flowers and honey bee visits increased in sites containing honey-bee hives and in sites containing honey-bee hives we found fewer stem-nesting bee species. The reproductive success, measured as number of nests, was not affected by distance to honey-bee hives or their presence but by availability and characteristics of nesting resources. Our results suggest that beekeeping in the Lüneburg Heath can affect the conservation of stem-nesting bee species richness but not the overall reproduction either of stem-nesting or of ground-nesting bees. Future experiments need control sites with larger distances than 500 m to hives. Until more information is available, conservation efforts should forgo to enhance honey bee stocking rates but enhance the availability of nesting resources.",
"title": ""
},
{
"docid": "neg:1840447_10",
"text": "The computer vision community has reached a point when it can start considering high-level reasoning tasks such as the \"communicative intents\" of images, or in what light an image portrays its subject. For example, an image might imply that a politician is competent, trustworthy, or energetic. We explore a variety of features for predicting these communicative intents. We study a number of facial expressions and body poses as cues for the implied nuances of the politician's personality. We also examine how the setting of an image (e.g. kitchen or hospital) influences the audience's perception of the portrayed politician. Finally, we improve the performance of an existing approach on this problem, by learning intermediate cues using convolutional neural networks. We show state of the art results on the Visual Persuasion dataset of Joo et al. [11].",
"title": ""
},
{
"docid": "neg:1840447_11",
"text": "Nanoscience emerged in the late 1980s and is developed and applied in China since the middle of the 1990s. Although nanotechnologies have been less developed in agronomy than other disciplines, due to less investment, nanotechnologies have the potential to improve agricultural production. Here, we review more than 200 reports involving nanoscience in agriculture, livestock, and aquaculture. The major points are as follows: (1) nanotechnologies used for seeds and water improved plant germination, growth, yield, and quality. (2) Nanotechnologies could increase the storage period for vegetables and fruits. (3) For livestock and poultry breeding, nanotechnologies improved animals immunity, oxidation resistance, and production and decreased antibiotic use and manure odor. For instance, the average daily gain of pig increased by 9.9–15.3 %, the ratio of feedstuff to weight decreased by 7.5–10.3 %, and the diarrhea rate decreased by 55.6–66.7 %. (4) Nanotechnologies for water disinfection in fishpond increased water quality and increased yields and survivals of fish and prawn. (5) Nanotechnologies for pesticides increased pesticide performance threefold and reduced cost by 50 %. (6) Nano urea increased the agronomic efficiency of nitrogen fertilization by 44.5 % and the grain yield by 10.2 %, versus normal urea. (7) Nanotechnologies are widely used for rapid detection and diagnosis, notably for clinical examination, food safety testing, and animal epidemic surveillance. (8) Nanotechnologies may also have adverse effects that are so far not well known.",
"title": ""
},
{
"docid": "neg:1840447_12",
"text": "Using dedicated hardware to do machine learning typically ends up in disaster because of cost, obsolescence, and poor software. The popularization of graphic processing units (GPUs), which are now available on every PC, provides an attractive alternative. We propose a generic 2-layer fully connected neural network GPU implementation which yields over 3/spl times/ speedup for both training and testing with respect to a 3 GHz P4 CPU.",
"title": ""
},
{
"docid": "neg:1840447_13",
"text": "During the past several years, there have been a significant number of researches conducted in the area of semiconductor final test scheduling problems (SFTSP). As specific example of simultaneous multiple resources scheduling problem (SMRSP), intelligent manufacturing planning and scheduling based on meta-heuristic methods, such as Genetic Algorithm (GA), Simulated Annealing (SA), and Particle Swarm Optimization (PSO), have become the common tools for finding satisfactory solutions within reasonable computational times in real settings. However, limited researches were aiming at analyze the effects of interdependent relations during group decision-making activities. Moreover for complex and large problems, local constraints and objectives from each managerial entity, and their contributions towards the global objectives cannot be effectively represented in a single model. In this paper, we propose a novel Cooperative Estimation of Distribution Algorithm (CEDA) to overcome the challenges mentioned before. The CEDA is established based on divide-and-conquer strategy and a co-evolutionary framework. Considerable experiments have been conducted and the results confirmed that CEDA outperforms recent research results for scheduling problems in FMS (Flexible Manufacturing Systems).",
"title": ""
},
{
"docid": "neg:1840447_14",
"text": "For robotic manipulators that are redundant or with high degrees of freedom (dof ), an analytical solution to the inverse kinematics is very difficult or impossible. Pioneer 2 robotic arm (P2Arm) is a recently developed and widely used 5-dof manipulator. There is no effective solution to its inverse kinematics to date. This paper presents a first complete analytical solution to the inverse kinematics of the P2Arm, which makes it possible to control the arm to any reachable position in an unstructured environment. The strategies developed in this paper could also be useful for solving the inverse kinematics problem of other types of robotic arms.",
"title": ""
},
{
"docid": "neg:1840447_15",
"text": "When does knowledge transfer benefit performance? Combining field data from a global consulting firm with an agent-based model, we examine how efforts to supplement one’s knowledge from coworkers interact with individual, organizational, and environmental characteristics to impact organizational performance. We find that once cost and interpersonal exchange are included in the analysis, the impact of knowledge transfer is highly contingent. Depending on specific characteristics and circumstances, knowledge transfer can better, matter little to, or even harm performance. Three illustrative studies clarify puzzling past results and offer specific boundary conditions: (1) At the individual level, better organizational support for employee learning diminishes the benefit of knowledge transfer for organizational performance. (2) At the organization level, broader access to organizational memory makes global knowledge transfer less beneficial to performance. (3) When the organizational environment becomes more turbulent, the organizational performance benefits of knowledge transfer decrease. The findings imply that organizations may forgo investments in both organizational memory and knowledge exchange, that wide-ranging knowledge exchange may be unimportant or even harmful for performance, and that organizations operating in turbulent environments may find that investment in knowledge exchange undermines performance rather than enhances it. At a time when practitioners are urged to make investments in facilitating knowledge transfer and collaboration, appreciation of the complex relationship between knowledge transfer and performance will help in reaping benefits while avoiding liabilities.",
"title": ""
},
{
"docid": "neg:1840447_16",
"text": "Typically in neuroimaging we are looking to extract some pertinent information from imperfect, noisy images of the brain. This might be the inference of percent changes in blood flow in perfusion FMRI data, segmentation of subcortical structures from structural MRI, or inference of the probability of an anatomical connection between an area of cortex and a subthalamic nucleus using diffusion MRI. In this article we will describe how Bayesian techniques have made a significant impact in tackling problems such as these, particularly in regards to the analysis tools in the FMRIB Software Library (FSL). We shall see how Bayes provides a framework within which we can attempt to infer on models of neuroimaging data, while allowing us to incorporate our prior belief about the brain and the neuroimaging equipment in the form of biophysically informed or regularising priors. It allows us to extract probabilistic information from the data, and to probabilistically combine information from multiple modalities. Bayes can also be used to not only compare and select between models of different complexity, but also to infer on data using committees of models. Finally, we mention some analysis scenarios where Bayesian methods are impractical, and briefly discuss some practical approaches that we have taken in these cases.",
"title": ""
},
{
"docid": "neg:1840447_17",
"text": "1 During the summer of 2005, I discovered that there was not a copy of my dissertation available from the library at McGill University. I was, however, able to obtain a copy of it on microfilm from another university that had initially obtained it on interlibrary loan. I am most grateful to Vicki Galbraith who typed this version from that copy, which except for some minor variations due to differences in type size and margins (plus this footnote, of course) is identical to that on the microfilm. ACKNOWLEDGEMENTS 1 The writer is grateful to Dr. J. T. McIlhone, Associate General Director in Charge of English Classes of the Montreal Catholic School Board, for his kind cooperation in making subjects available, and to the Principals and French teachers of each high school for their assistance and cooperation during the testing programs. advice on the statistical analysis. In addition, the writer would like to express his appreciation to Mr. K. Tunstall for his assistance in the difficult task of interviewing the parents of each student. Finally, the writer would like to express his gratitude to Janet W. Gardner for her invaluable assistance in all phases of the research program.",
"title": ""
},
{
"docid": "neg:1840447_18",
"text": "Assessing distance betweeen the true and the sample distribution is a key component of many state of the art generative models, such as Wasserstein Autoencoder (WAE). Inspired by prior work on Sliced-Wasserstein Autoencoders (SWAE) and kernel smoothing we construct a new generative model – Cramer-Wold AutoEncoder (CWAE). CWAE cost function, based on introduced Cramer-Wold distance between samples, has a simple closed-form in the case of normal prior. As a consequence, while simplifying the optimization procedure (no need of sampling necessary to evaluate the distance function in the training loop), CWAE performance matches quantitatively and qualitatively that of WAE-MMD (WAE using maximum mean discrepancy based distance function) and often improves upon SWAE.",
"title": ""
},
{
"docid": "neg:1840447_19",
"text": "A security-enhanced agile software development process, SEAP, is introduced in the development of a mobile money transfer system at Ericsson Corp. A specific characteristic of SEAP is that it includes a security group consisting of four different competences, i.e., Security manager, security architect, security master and penetration tester. Another significant feature of SEAP is an integrated risk analysis process. In analyzing risks in the development of the mobile money transfer system, a general finding was that SEAP either solves risks that were previously postponed or solves a larger proportion of the risks in a timely manner. The previous software development process, i.e., The baseline process of the comparison outlined in this paper, required 2.7 employee hours spent for every risk identified in the analysis process compared to, on the average, 1.5 hours for the SEAP. The baseline development process left 50% of the risks unattended in the software version being developed, while SEAP reduced that figure to 22%. Furthermore, SEAP increased the proportion of risks that were corrected from 12.5% to 67.1%, i.e., More than a five times increment. This is important, since an early correction may avoid severe attacks in the future. The security competence in SEAP accounts for 5% of the personnel cost in the mobile money transfer system project. As a comparison, the corresponding figure, i.e., For security, was 1% in the previous development process.",
"title": ""
}
] |
1840448 | Experimental Investigation of Light-Gauge Steel Plate Shear Walls | [
{
"docid": "pos:1840448_0",
"text": "plate is ntation of by plastic plex, wall ection of procedure Abstract: A revised procedure for the design of steel plate shear walls is proposed. In this procedure the thickness of the infill found using equations that are derived from the plastic analysis of the strip model, which is an accepted model for the represe steel plate shear walls. Comparisons of experimentally obtained ultimate strengths of steel plate shear walls and those predicted analysis are given and reasonable agreement is observed. Fundamental plastic collapse mechanisms for several, more com configurations are also given. Additionally, an existing codified procedure for the design of steel plate walls is reviewed and a s this procedure which could lead to designs with less-than-expected ultimate strength is identified. It is shown that the proposed eliminates this possibility without changing the other valid sections of the current procedure.",
"title": ""
}
] | [
{
"docid": "neg:1840448_0",
"text": "Dense video captioning aims to generate text descriptions for all events in an untrimmed video. This involves both detecting and describing events. Therefore, all previous methods on dense video captioning tackle this problem by building two models, i.e. an event proposal and a captioning model, for these two sub-problems. The models are either trained separately or in alternation. This prevents direct influence of the language description to the event proposal, which is important for generating accurate descriptions. To address this problem, we propose an end-to-end transformer model for dense video captioning. The encoder encodes the video into appropriate representations. The proposal decoder decodes from the encoding with different anchors to form video event proposals. The captioning decoder employs a masking network to restrict its attention to the proposal event over the encoding feature. This masking network converts the event proposal to a differentiable mask, which ensures the consistency between the proposal and captioning during training. In addition, our model employs a self-attention mechanism, which enables the use of efficient non-recurrent structure during encoding and leads to performance improvements. We demonstrate the effectiveness of this end-to-end model on ActivityNet Captions and YouCookII datasets, where we achieved 10.12 and 6.58 METEOR score, respectively.",
"title": ""
},
{
"docid": "neg:1840448_1",
"text": "Mixed methods research is an approach that combines quantitative and qualitative research methods in the same research inquiry. Such work can help develop rich insights into various phenomena of interest that cannot be fully understood using only a quantitative or a qualitative method. Notwithstanding the benefits and repeated calls for such work, there is a dearth of mixed methods research in information systems. Building on the literature on recent methodological advances in mixed methods research, we develop a set of guidelines for conducting mixed methods research in IS. We particularly elaborate on three important aspects of conducting mixed methods research: (1) appropriateness of a mixed methods approach; (2) development of meta-inferences (i.e., substantive theory) from mixed methods research; and (3) assessment of the quality of meta-inferences (i.e., validation of mixed methods research). The applicability of these guidelines is illustrated using two published IS papers that used mixed methods.",
"title": ""
},
{
"docid": "neg:1840448_2",
"text": "Experience replay is one of the most commonly used approaches to improve the sample efficiency of reinforcement learning algorithms. In this work, we propose an approach to select and replay sequences of transitions in order to accelerate the learning of a reinforcement learning agent in an off-policy setting. In addition to selecting appropriate sequences, we also artificially construct transition sequences using information gathered from previous agent-environment interactions. These sequences, when replayed, allow value function information to trickle down to larger sections of the state/state-action space, thereby making the most of the agent's experience. We demonstrate our approach on modified versions of standard reinforcement learning tasks such as the mountain car and puddle world problems and empirically show that it enables faster, and more accurate learning of value functions as compared to other forms of experience replay. Further, we briefly discuss some of the possible extensions to this work, as well as applications and situations where this approach could be particularly useful.",
"title": ""
},
{
"docid": "neg:1840448_3",
"text": "XTS-AES is an advanced mode of AES for data protection of sector-based devices. Compared to other AES modes, it features two secret keys instead of one, and an additional tweak for each data block. These characteristics make the mode not only resistant against cryptoanalysis attacks, but also more challenging for side-channel attack. In this paper, we propose two attack methods on XTS-AES overcoming these challenges. In the first attack, we analyze side-channel leakage of the particular modular multiplication in XTS-AES mode. In the second one, we utilize the relationship between two consecutive block tweaks and propose a method to work around the masking of ciphertext by the tweak. These attacks are verified on an FPGA implementation of XTS-AES. The results show that XTS-AES is susceptible to side-channel power analysis attacks, and therefore dedicated protections are required for security of XTS-AES in storage devices.",
"title": ""
},
{
"docid": "neg:1840448_4",
"text": "Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840448_5",
"text": "Social network sites (SNSs) are becoming an increasingly popular resource for both students and adults, who use them to connect with and maintain relationships with a variety of ties. For many, the primary function of these sites is to consume and distribute personal content about the self. Privacy concerns around sharing information in a public or semi-public space are amplified by SNSs’ structural characteristics, which may obfuscate the true audience of these disclosures due to their technical properties (e.g., persistence, searchability) and dynamics of use (e.g., invisible audiences, context collapse) (boyd, 2008b). Early work on the topic focused on the privacy pitfalls of Facebook and other SNSs (e.g., Acquisti & Gross, 2006; Barnes, 2006; Gross & Acquisti, 2005) and argued that individuals were (perhaps inadvertently) disclosing information that might be inappropriate for some audiences, such as future employers, or that might enable identity theft or other negative outcomes.",
"title": ""
},
{
"docid": "neg:1840448_6",
"text": "This paper describes MITRE’s participation in the Paraphrase and Semantic Similarity in Twitter task (SemEval-2015 Task 1). This effort placed first in Semantic Similarity and second in Paraphrase Identification with scores of Pearson’s r of 61.9%, F1 of 66.7%, and maxF1 of 72.4%. We detail the approaches we explored including mixtures of string matching metrics, alignments using tweet-specific distributed word representations, recurrent neural networks for modeling similarity with those alignments, and distance measurements on pooled latent semantic features. Logistic regression is used to tie the systems together into the ensembles submitted for evaluation.",
"title": ""
},
{
"docid": "neg:1840448_7",
"text": "Chronic lower leg pain results from various conditions, most commonly, medial tibial stress syndrome, stress fracture, chronic exertional compartment syndrome, nerve entrapment, and popliteal artery entrapment syndrome. Symptoms associated with these conditions often overlap, making a definitive diagnosis difficult. As a result, an algorithmic approach was created to aid in the evaluation of patients with complaints of lower leg pain and to assist in defining a diagnosis by providing recommended diagnostic studies for each condition. A comprehensive physical examination is imperative to confirm a diagnosis and should begin with an inquiry regarding the location and onset of the patient's pain and tenderness. Confirmation of the diagnosis requires performing the appropriate diagnostic studies, including radiographs, bone scans, magnetic resonance imaging, magnetic resonance angiography, compartmental pressure measurements, and arteriograms. Although most conditions causing lower leg pain are treated successfully with nonsurgical management, some syndromes, such as popliteal artery entrapment syndrome, may require surgical intervention. Regardless of the form of treatment, return to activity must be gradual and individualized for each patient to prevent future athletic injury.",
"title": ""
},
{
"docid": "neg:1840448_8",
"text": "The primary focus of this study is to understand the current port operating condition and recommend short term measures to improve traffic condition in the port of Chennai. The cause of congestion is identified based on the data collected and observation made at port gates as well as at terminal gates in Chennai port. A simulation model for the existing road layout is developed in micro-simulation software VISSIM and is calibrated to reflect the prevailing condition inside the port. The data such as truck origin/destination, hourly inflow and outflow of trucks, speed, and stopping time at checking booths are used as input. Routing data is used to direct traffic to specific terminal or dock within the port. Several alternative scenarios are developed and simulated to get results of the key performance indicators. A comparative and detailed analysis of these indicators is used to evaluate recommendations to reduce congestion inside the port.",
"title": ""
},
{
"docid": "neg:1840448_9",
"text": "This paper attempts to review examples of the use of storytelling and narrative in immersive virtual reality worlds. Particular attention is given to the way narrative is incorporated in artistic, cultural, and educational applications through the development of specific sensory and perceptual experiences that are based on characteristics inherent to virtual reality, such as immersion, interactivity, representation, and illusion. Narrative development is considered on three axes: form (visual representation), story (emotional involvement), and history (authenticated cultural content) and how these can come together.",
"title": ""
},
{
"docid": "neg:1840448_10",
"text": "This paper presents a new design of high frequency DC/AC inverter for home applications using fuel cells or photovoltaic array sources. A battery bank parallel to the DC link is provided to take care of the slow dynamic response of the source. The design is based on a push-pull DC/DC converter followed by a full-bridge PWM inverter topology. The nominal power rating is 10 kW. Actual design parameters, procedure and experimental results of a 1.5 kW prototype are provided. The objective of this paper is to explore the possibility of making renewable sources of energy utility interactive by means of low cost power electronic interface.",
"title": ""
},
{
"docid": "neg:1840448_11",
"text": "Effects of Music Therapy on Prosocial Behavior of Students with Autism and Developmental Disabilities by Catherine L. de Mers Dr. Matt Tincani, Examination Committee Chair Assistant Professor o f Special Education University o f Nevada, Las Vegas This researeh study employed a multiple baseline across participants design to investigate the effects o f music therapy intervention on hitting, screaming, and asking o f three children with autism and/or developmental disabilities. Behaviors were observed and recorded during 10-minute free-play sessions both during baseline and immediately after musie therapy sessions during intervention. Interobserver agreement and procedural fidelity data were collected. Music therapy sessions were modeled on literature pertaining to music therapy with children with autism. In addition, social validity surveys were eollected to answer research questions pertaining to the social validity of music therapy as an intervention. Findings indicate that music therapy produced moderate and gradual effects on hitting, screaming, and asking. Hitting and sereaming deereased following intervention, while asking increased. Intervention effects were maintained three weeks following",
"title": ""
},
{
"docid": "neg:1840448_12",
"text": "We propose a method of generating teaching policies for use in intelligent tutoring systems (ITS) for concept learning tasks <xref ref-type=\"bibr\" rid=\"ref1\">[1]</xref> , e.g., teaching students the meanings of words by showing images that exemplify their meanings à la Rosetta Stone <xref ref-type=\"bibr\" rid=\"ref2\">[2]</xref> and Duo Lingo <xref ref-type=\"bibr\" rid=\"ref3\">[3]</xref> . The approach is grounded in control theory and capitalizes on recent work by <xref ref-type=\"bibr\" rid=\"ref4\">[4] </xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> that frames the “teaching” problem as that of finding approximately optimal teaching policies for approximately optimal learners (AOTAOL). Our work expands on <xref ref-type=\"bibr\" rid=\"ref4\">[4]</xref> , <xref ref-type=\"bibr\" rid=\"ref5\">[5]</xref> in several ways: (1) We develop a novel student model in which the teacher's actions can <italic>partially </italic> eliminate hypotheses about the curriculum. (2) With our student model, inference can be conducted <italic> analytically</italic> rather than numerically, thus allowing computationally efficient planning to optimize learning. (3) We develop a reinforcement learning-based hierarchical control technique that allows the teaching policy to search through <italic>deeper</italic> learning trajectories. We demonstrate our approach in a novel ITS for foreign language learning similar to Rosetta Stone and show that the automatically generated AOTAOL teaching policy performs favorably compared to two hand-crafted teaching policies.",
"title": ""
},
{
"docid": "neg:1840448_13",
"text": "A new methodology based on mixed linear models was developed for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions. Reliable estimates of QTL main effects (additive and epistasis effects) can be obtained by the maximum-likelihood estimation method, while QE interaction effects (additive×environment interaction and epistasis×environment interaction) can be predicted by the-best-linear-unbiased-prediction (BLUP) method. Likelihood ratio and t statistics were combined for testing hypotheses about QTL effects and QE interactions. Monte Carlo simulations were conducted for evaluating the unbiasedness, accuracy, and power for parameter estimation in QTL mapping. The results indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects of QTLs, as well as unbiased predicted values for QE interactions. Additionally, the mixed-model approaches also showed high accuracy and power in mapping QTLs with epistatic effects and QE interactions. Based on the models and the methodology, a computer software program (QTLMapper version 1.0) was developed, which is suitable for interval mapping of QTLs with additive, additive×additive epistasis, and their environment interactions.",
"title": ""
},
{
"docid": "neg:1840448_14",
"text": "There are two conflicting perspectives regarding the relationship between profanity and dishonesty. These two forms of norm-violating behavior share common causes and are often considered to be positively related. On the other hand, however, profanity is often used to express one's genuine feelings and could therefore be negatively related to dishonesty. In three studies, we explored the relationship between profanity and honesty. We examined profanity and honesty first with profanity behavior and lying on a scale in the lab (Study 1; N = 276), then with a linguistic analysis of real-life social interactions on Facebook (Study 2; N = 73,789), and finally with profanity and integrity indexes for the aggregate level of U.S. states (Study 3; N = 50 states). We found a consistent positive relationship between profanity and honesty; profanity was associated with less lying and deception at the individual level and with higher integrity at the society level.",
"title": ""
},
{
"docid": "neg:1840448_15",
"text": "The mood of a text and the intention of the writer can be reflected in the typeface. However, in designing a typeface, it is difficult to keep the style of various characters consistent, especially for languages with lots of morphological variations such as Chinese. In this paper, we propose a Typeface Completion Network (TCN) which takes one character as an input, and automatically completes the entire set of characters in the same style as the input characters. Unlike existing models proposed for image-to-image translation, TCN embeds a character image into two separate vectors representing typeface and content. Combined with a reconstruction loss from the latent space, and with other various losses, TCN overcomes the inherent difficulty in designing a typeface. Also, compared to previous image-to-image translation models, TCN generates high quality character images of the same typeface with a much smaller number of model parameters. We validate our proposed model on the Chinese and English character datasets, which is paired data, and the CelebA dataset, which is unpaired data. In these datasets, TCN outperforms recently proposed state-of-the-art models for image-to-image translation. The source code of our model is available at https://github.com/yongqyu/TCN.",
"title": ""
},
{
"docid": "neg:1840448_16",
"text": "Tomatoes are well-known vegetables, grown and eaten around the world due to their nutritional benefits. The aim of this research was to determine the chemical composition (dry matter, soluble solids, titritable acidity, vitamin C, lycopene), the taste index and maturity in three cherry tomato varieties (Sakura, Sunstream, Mathew) grown and collected from greenhouse at different stages of ripening. The output of the analyses showed that there were significant differences in the mean values among the analysed parameters according to the stage of ripening and variety. During ripening, the content of soluble solids increases on average two times in all analyzed varieties; the highest content of vitamin C and lycopene was determined in tomatoes of Sunstream variety in red stage. The highest total acidity expressed as g of citric acid 100 g was observed in pink stage (variety Sakura) or a breaker stage (varieties Sunstream and Mathew). The taste index of the variety Sakura was higher at all analyzed ripening stages in comparison with other varieties. This shows that ripening stages have a significant effect on tomato biochemical composition along with their variety.",
"title": ""
},
{
"docid": "neg:1840448_17",
"text": "In this paper we look at how to sparsify a graph i.e. how to reduce the edgeset while keeping the nodes intact, so as to enable faster graph clustering without sacrificing quality. The main idea behind our approach is to preferentially retain the edges that are likely to be part of the same cluster. We propose to rank edges using a simple similarity-based heuristic that we efficiently compute by comparing the minhash signatures of the nodes incident to the edge. For each node, we select the top few edges to be retained in the sparsified graph. Extensive empirical results on several real networks and using four state-of-the-art graph clustering and community discovery algorithms reveal that our proposed approach realizes excellent speedups (often in the range 10-50), with little or no deterioration in the quality of the resulting clusters. In fact, for at least two of the four clustering algorithms, our sparsification consistently enables higher clustering accuracies.",
"title": ""
},
{
"docid": "neg:1840448_18",
"text": "nature protocols | VOL.7 NO.11 | 2012 | 1983 IntroDuctIon In a typical histology study, it is necessary to make thin sections of blocks of frozen or fixed tissue for microscopy. This process has major limitations for obtaining a 3D picture of structural components and the distribution of cells within tissues. For example, in axon regeneration studies, after labeling the injured axons, it is common that the tissue of interest (e.g., spinal cord, optic nerve) is sectioned. Subsequently, when tissue sections are analyzed under the microscope, only short fragments of axons are observed within each section; hence, the 3D information of axonal structures is lost. Because of this confusion, these fragmented axonal profiles might be interpreted as regenerated axons even though they could be spared axons1. In addition, the growth trajectories and target regions of the regenerating axons cannot be identified by visualization of axonal fragments. Similar problems could occur in cancer and immunology studies when only small fractions of target cells are observed within large organs. To avoid these limitations and problems, tissues ideally should be imaged at high spatial resolution without sectioning. However, optical imaging of thick tissues is limited mostly because of scattering of imaging light through the thick tissues, which contain various cellular and extracellular structures with different refractive indices. The imaging light traveling through different structures scatters and loses its excitation and emission efficiency, resulting in a lower resolution and imaging depth2,3. Optical clearing of tissues by organic solvents, which make the biological tissue transparent by matching the refractory indexes of different tissue layers to the solvent, has become a prominent method for imaging thick tissues2,4. In cleared tissues, the imaging light does not scatter and travels unobstructed throughout the different tissue layers. For this purpose, the first tissue clearing method was developed about a century ago by Spalteholz, who used a mixture of benzyl alcohol and methyl salicylate to clear large organs such as the heart5,6. In general, the first step of tissue clearing is tissue dehydration, owing to the low refractive index of water compared with cellular structures containing proteins and lipids4. Subsequently, dehydrated tissue is impregnated with an optical clearing agent, such as glucose7, glycerol8, benzyl alcohol–benzyl benzoate (BABB, also known as Murray’s clear)4,9–13 or dibenzyl ether (DBE)13,14, which have approximately the same refractive index as the impregnated tissue. At the end of the clearing procedure, the cleared tissue hardens and turns transparent, and thus resembles glass.",
"title": ""
},
{
"docid": "neg:1840448_19",
"text": "We present a novel technique to automatically colorize grayscale images that combines both global priors and local image features. Based on Convolutional Neural Networks, our deep network features a fusion layer that allows us to elegantly merge local information dependent on small image patches with global priors computed using the entire image. The entire framework, including the global and local priors as well as the colorization model, is trained in an end-to-end fashion. Furthermore, our architecture can process images of any resolution, unlike most existing approaches based on CNN. We leverage an existing large-scale scene classification database to train our model, exploiting the class labels of the dataset to more efficiently and discriminatively learn the global priors. We validate our approach with a user study and compare against the state of the art, where we show significant improvements. Furthermore, we demonstrate our method extensively on many different types of images, including black-and-white photography from over a hundred years ago, and show realistic colorizations.",
"title": ""
}
] |
1840449 | Beyond the Prince : Race and Gender Role Portrayal in | [
{
"docid": "pos:1840449_0",
"text": "The popular Disney Princess line includes nine films (e.g., Snow White, Beauty and the Beast) and over 25,000 marketable products. Gender role depictions of the prince and princess characters were examined with a focus on their behavioral characteristics and climactic outcomes in the films. Results suggest that the prince and princess characters differ in their portrayal of traditionally masculine and feminine characteristics, these gender role portrayals are complex, and trends towards egalitarian gender roles are not linear over time. Content coding analyses demonstrate that all of the movies portray some stereotypical representations of gender, including the most recent film, The Princess and the Frog. Although both the male and female roles have changed over time in the Disney Princess line, the male characters exhibit more androgyny throughout and less change in their gender role portrayals.",
"title": ""
}
] | [
{
"docid": "neg:1840449_0",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "neg:1840449_1",
"text": "Deep learning has received much attention as of the most powerful approaches for multimodal representation learning in recent years. An ideal model for multimodal data can reason about missing modalities using the available ones, and usually provides more information when multiple modalities are being considered. All the previous deep models contain separate modality-specific networks and find a shared representation on top of those networks. Therefore, they only consider high level interactions between modalities to find a joint representation for them. In this paper, we propose a multimodal deep learning framework (MDLCW) that exploits the cross weights between representation of modalities, and try to gradually learn interactions of the modalities in a deep network manner (from low to high level interactions). Moreover, we theoretically show that considering these interactions provide more intra-modality information, and introduce a multi-stage pre-training method that is based on the properties of multi-modal data. In the proposed framework, as opposed to the existing deep methods for multi-modal data, we try to reconstruct the representation of each modality at a given level, with representation of other modalities in the previous layer. Extensive experimental results show that the proposed model outperforms state-of-the-art information retrieval methods for both image and text queries on the PASCAL-sentence and SUN-Attribute databases.",
"title": ""
},
{
"docid": "neg:1840449_2",
"text": "The fast growing deep learning technologies have become the main solution of many machine learning problems for medical image analysis. Deep convolution neural networks (CNNs), as one of the most important branch of the deep learning family, have been widely investigated for various computer-aided diagnosis tasks including long-term problems and continuously emerging new problems. Image contour detection is a fundamental but challenging task that has been studied for more than four decades. Recently, we have witnessed the significantly improved performance of contour detection thanks to the development of CNNs. Beyond purusing performance in existing natural image benchmarks, contour detection plays a particularly important role in medical image analysis. Segmenting various objects from radiology images or pathology images requires accurate detection of contours. However, some problems, such as discontinuity and shape constraints, are insufficiently studied in CNNs. It is necessary to clarify the challenges to encourage further exploration. The performance of CNN based contour detection relies on the state-of-the-art CNN architectures. Careful investigation of their design principles and motivations is critical and beneficial to contour detection. In this paper, we first review recent development of medical image contour detection and point out the current confronting challenges and problems. We discuss the development of general CNNs and their applications in image contours (or edges) detection. We compare those methods in detail, clarify their strengthens and weaknesses. Then we review their recent applications in medical image analysis and point out limitations, with the goal to light some potential directions in medical image analysis. We expect the paper to cover comprehensive technical ingredients of advanced CNNs to enrich the study in the medical image domain. 1E-mail: zizhaozhang@ufl.edu Preprint submitted to arXiv August 26, 2018 ar X iv :1 70 8. 07 28 1v 1 [ cs .C V ] 2 4 A ug 2 01 7",
"title": ""
},
{
"docid": "neg:1840449_3",
"text": "The rise and development of O2O e-commerce has brought new opportunities for the enterprise, and also proposed the new challenge to the traditional electronic commerce. The formation process of customer loyalty of O2O e-commerce environment is a complex psychological process. This paper will combine the characteristics of O2O e-commerce, customer's consumer psychology and consumer behavior characteristics to build customer loyalty formation mechanism model which based on the theory of reasoned action model. The related factors of the model including the customer perceived value, customer satisfaction, customer trust and customer switching costs. By exploring the factors affecting customer’ loyalty of O2O e-commerce can provide reference and basis for enterprises to develop e-commerce and better for O2O e-commerce enterprises to develop marketing strategy and enhance customer loyalty. At the end of this paper will also put forward some targeted suggestions for O2O e-commerce enterprises.",
"title": ""
},
{
"docid": "neg:1840449_4",
"text": "This paper presents new in-line pseudoelliptic bandpass filters with nonresonating nodes. Microwave bandpass filters based on dual- and triple-mode cavities are introduced. In each case, the transmission zeros (TZs) are individually generated and controlled by dedicated resonators. Dual- and triple-mode cavities are kept homogeneous and contain no coupling or tuning elements. A third-order filter with a TZ extracted at its center is designed by cascading two dual-mode cavities. A direct design technique of this filter is introduced and shown to produce accurate initial designs for narrow-band cases. A six-pole filter is designed by cascading two triple-mode cavities. Measured results are presented to demonstrate the validity of this novel approach.",
"title": ""
},
{
"docid": "neg:1840449_5",
"text": "Healthcare scientific applications, such as body area network, require of deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purpose. Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such kind of real time streaming data applications. The current cloud platform either lacks of such a module to process streaming data, or scales in regard to coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is the scaling capability being designed at the Map and Task level, rather than being scaled from the compute-node level. This strategy is capable of not only scaling up and down in real time, but also leading to effective use of compute resources in cloud data center. As a first step towards implementing this framework in real cloud, we developed a simulator that captures workload strength, and provisions the amount of Map and Reduce tasks just in need and in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filter, to estimate the unknown workload characteristics. We see 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently, as the streaming data changes its arrival rate. © 2014 Elsevier B.V. All rights reserved. ∗ Corresponding author at: Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA. Tel.: +1",
"title": ""
},
{
"docid": "neg:1840449_6",
"text": "Both patients and clinicians may incorrectly diagnose vulvovaginitis symptoms. Patients often self-treat with over-the-counter antifungals or home remedies, although they are unable to distinguish among the possible causes of their symptoms. Telephone triage practices and time constraints on office visits may also hamper effective diagnosis. This review is a guide to distinguish potential causes of vulvovaginal symptoms. The first section describes both common and uncommon conditions associated with vulvovaginitis, including infectious vulvovaginitis, allergic contact dermatitis, systemic dermatoses, rare autoimmune diseases, and neuropathic vulvar pain syndromes. The focus is on the clinical presentation, specifically 1) the absence or presence and characteristics of vaginal discharge; 2) the nature of sensory symptoms (itch and/or pain, localized or generalized, provoked, intermittent, or chronic); and 3) the absence or presence of mucocutaneous changes, including the types of lesions observed and the affected tissue. Additionally, this review describes how such features of the clinical presentation can help identify various causes of vulvovaginitis.",
"title": ""
},
{
"docid": "neg:1840449_7",
"text": "Information practices that use personal, financial, and health-related information are governed by US laws and regulations to prevent unauthorized use and disclosure. To ensure compliance under the law, the security and privacy requirements of relevant software systems must properly be aligned with these regulations. However, these regulations describe stakeholder rules, called rights and obligations, in complex and sometimes ambiguous legal language. These \"rules\" are often precursors to software requirements that must undergo considerable refinement and analysis before they become implementable. To support the software engineering effort to derive security requirements from regulations, we present a methodology for directly extracting access rights and obligations from regulation texts. The methodology provides statement-level coverage for an entire regulatory document to consistently identify and infer six types of data access constraints, handle complex cross references, resolve ambiguities, and assign required priorities between access rights and obligations to avoid unlawful information disclosures. We present results from applying this methodology to the entire regulation text of the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule.",
"title": ""
},
{
"docid": "neg:1840449_8",
"text": "Automated Facial Expression Recognition has remained a challenging and interesting problem in computer vision. The recognition of facial expressions is difficult problem for machine learning techniques, since people can vary significantly in the way they show their expressions. Deep learning is a new area of research within machine learning method which can classify images of human faces into emotion categories using Deep Neural Networks (DNN). Convolutional neural networks (CNN) have been widely used to overcome the difficulties in facial expression classification. In this paper, we present a new architecture network based on CNN for facial expressions recognition. We fine tuned our architecture with Visual Geometry Group model (VGG) to improve results. To evaluate our architecture we tested it with many largely public databases (CK+, MUG, and RAFD). Obtained results show that the CNN approach is very effective in image expression recognition on many public databases which achieve an improvements in facial expression analysis.",
"title": ""
},
{
"docid": "neg:1840449_9",
"text": "This paper proposes a model of information aesthetics in the context of information visualization. It addresses the need to acknowledge a recently emerging number of visualization projects that combine information visualization techniques with principles of creative design. The proposed model contributes to a better understanding of information aesthetics as a potentially independent research field within visualization that specifically focuses on the experience of aesthetics, dataset interpretation and interaction. The proposed model is based on analysing existing visualization techniques by their interpretative intent and data mapping inspiration. It reveals information aesthetics as the conceptual link between information visualization and visualization art, and includes the fields of social and ambient visualization. This model is unique in its focus on aesthetics as the artistic influence on the technical implementation and intended purpose of a visualization technique, rather than subjective aesthetic judgments of the visualization outcome. This research provides a framework for understanding aesthetics in visualization, and allows for new design guidelines and reviewing criteria.",
"title": ""
},
{
"docid": "neg:1840449_10",
"text": "With the prevalence of accessible depth sensors, dynamic human body skeletons have attracted much attention as a robust modality for action recognition. Previous methods model skeletons based on RNN or CNN, which has limited expressive power for irregular joints. In this paper, we represent skeletons naturally on graphs and propose a generalized graph convolutional neural networks (GGCN) for skeleton-based action recognition, aiming to capture space-time variation via spectral graph theory. In particular, we construct a generalized graph over consecutive frames, where each joint is not only connected to its neighboring joints in the same frame strongly or weakly, but also linked with relevant joints in the previous and subsequent frames. The generalized graphs are then fed into GGCN along with the coordinate matrix of the skeleton sequence for feature learning, where we deploy high-order and fast Chebyshev approximation of spectral graph convolution in the network. Experiments show that we achieve the state-of-the-art performance on the widely used NTU RGB+D, UT-Kinect and SYSU 3D datasets.",
"title": ""
},
{
"docid": "neg:1840449_11",
"text": "Content distribution on today's Internet operates primarily in two modes: server-based and peer-to-peer (P2P). To leverage the advantages of both modes while circumventing their key limitations, a third mode: peer-to-server/peer (P2SP) has emerged in recent years. Although P2SP can provide efficient hybrid server-P2P content distribution, P2SP generally works in a closed manner by only utilizing its private owned servers to accelerate its private organized peer swarms. Consequently, P2SP still has its limitations in both content abundance and server bandwidth. To this end, the fourth mode (or says a generalized mode of P2SP) has appeared as \"open-P2SP\" that integrates various third-party servers, contents, and data transfer protocols all over the Internet into a large, open, and federated P2SP platform. In this paper, based on a large-scale commercial open-P2SP system named \"QQXuanfeng\" , we investigate the key challenging problems, practical designs and real-world performances of open-P2SP. Such \"white-box\" study of open-P2SP provides solid experiences and helpful heuristics to the designers of similar systems.",
"title": ""
},
{
"docid": "neg:1840449_12",
"text": "This paper highlights different security threats and vulnerabilities that is being challenged in smart-grid utilizing Distributed Network Protocol (DNP3) as a real time communication protocol. Experimentally, we will demonstrate two scenarios of attacks, unsolicited message attack and data set injection. The experiments were run on a computer virtual environment and then simulated in DETER testbed platform. The use of intrusion detection system will be necessary to identify attackers targeting different part of the smart grid infrastructure. Therefore, mitigation techniques will be used to ensure a healthy check of the network and we will propose the use of host-based intrusion detection agent at each Intelligent Electronic Device (IED) for the purpose of detecting the intrusion and mitigating it. Performing attacks, attack detection, prevention and counter measures will be our primary goal to achieve in this research paper.",
"title": ""
},
{
"docid": "neg:1840449_13",
"text": "MEC is an emerging paradigm that provides computing, storage, and networking resources within the edge of the mobile RAN. MEC servers are deployed on a generic computing platform within the RAN, and allow for delay-sensitive and context-aware applications to be executed in close proximity to end users. This paradigm alleviates the backhaul and core network and is crucial for enabling low-latency, high-bandwidth, and agile mobile services. This article envisions a real-time, context-aware collaboration framework that lies at the edge of the RAN, comprising MEC servers and mobile devices, and amalgamates the heterogeneous resources at the edge. Specifically, we introduce and study three representative use cases ranging from mobile edge orchestration, collaborative caching and processing, and multi-layer interference cancellation. We demonstrate the promising benefits of the proposed approaches in facilitating the evolution to 5G networks. Finally, we discuss the key technical challenges and open research issues that need to be addressed in order to efficiently integrate MEC into the 5G ecosystem.",
"title": ""
},
{
"docid": "neg:1840449_14",
"text": "This paper presents simulation and experimental investigation results of steerable integrated lens antennas (ILAs) operating in the 60 GHz frequency band. The feed array of the ILAs is comprised by four switched aperture coupled microstrip antenna (ACMA) elements that allows steering between four different antenna main beam directions in one plane. The dielectric lenses of the designed ILAs are extended hemispherical quartz (ε = 3.8) lenses with the radiuses of 7.5 and 12.5 mm. The extension lengths of the lenses are selected through the electromagnetic optimization in order to achieve the maximum ILAs directivities and also the minimum directivity degradations of the outer antenna elements in the feed array (± 3 mm displacement) relatively to the inner ones (± 1 mm displacement). Simulated maximum directivities of the boresight beam of the designed ILAs are 19.8 dBi and 23.8 dBi that are sufficient for the steerable antennas for the millimeter-wave WLAN/WPAN communication systems. The feed ACMA array together with the waveguide to microstrip transition dedicated for experimental investigations is fabricated on high frequency and low cost Rogers 4003C substrate. Single Pole Double Through (SPDT) switches from Hittite are used in order to steer the ILA prototypes main beam directions. The experimental results of the fabricated electronically steerable quartz ILA prototypes prove the simulation results and show ±35° and ±22° angle sector coverage for the lenses with the 7.5 and 12.5 mm radiuses respectively.",
"title": ""
},
{
"docid": "neg:1840449_15",
"text": "Sex differences in children’s toy preferences are thought by many to arise from gender socialization. However, evidence from patients with endocrine disorders suggests that biological factors during early development (e.g., levels of androgens) are influential. In this study, we found that vervet monkeys (Cercopithecus aethiops sabaeus) show sex differences in toy preferences similar to those documented previously in children. The percent of contact time with toys typically preferred by boys (a car and a ball) was greater in male vervets (n = 33) than in female vervets (n = 30) (P < .05), whereas the percent of contact time with toys typically preferred by girls (a doll and a pot) was greater in female vervets than in male vervets (P < .01). In contrast, contact time with toys preferred equally by boys and girls (a picture book and a stuffed dog) was comparable in male and female vervets. The results suggest that sexually differentiated object preferences arose early in human evolution, prior to the emergence of a distinct hominid lineage. This implies that sexually dimorphic preferences for features (e.g., color, shape, movement) may have evolved from differential selection pressures based on the different behavioral roles of males and females, and that evolved object feature preferences may contribute to present day sexually dimorphic toy preferences in children. D 2002 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840449_16",
"text": "Player experience is difficult to evaluate and report, especially using quantitative methodologies in addition to observations and interviews. One step towards tying quantitative physiological measures of player arousal to player experience reports are Biometric Storyboards (BioSt). They can visualise meaningful relationships between a player's physiological changes and game events. This paper evaluates the usefulness of BioSt to the game industry. We presented the Biometric Storyboards technique to six game developers and interviewed them about the advantages and disadvantages of this technique.",
"title": ""
},
{
"docid": "neg:1840449_17",
"text": "Cellular Automata (CA) have attracted growing attention in urban simulation because their capability in spatial modelling is not fully developed in GIS. This paper discusses how cellular automata (CA) can be extended and integrated with GIS to help planners to search for better urban forms for sustainable development. The cellular automata model is built within a grid-GIS system to facilitate easy access to GIS databases for constructing the constraints. The essence of the model is that constraint space is used to regulate cellular space. Local, regional and global constraints play important roles in a ecting modelling results. In addition, ‘grey’ cells are de ned to represent the degrees or percentages of urban land development during the iterations of modelling for more accurate results. The model can be easily controlled by the parameter k using a power transformation function for calculating the constraint scores. It can be used as a useful planning tool to test the e ects of di erent urban development scenarios. 1. Cellular automata and GIS for urban simulation Cellular automata (CA) were developed by Ulam in the 1940s and soon used by Von Neumann to investigate the logical nature of self-reproducible systems (White and Engelen 1993). A CA system usually consists of four elements—cells, states, neighbourhoods and rules. Cells are the smallest units which must manifest some adjacency or proximity. The state of a cell can change according to transition rules which are de ned in terms of neighbourhood functions. The notion of neighbourhood is central to the CA paradigm (Couclelis 1997), but the de nition of neighbourhood is rather relaxed. CA are cell-based methods that can model two-dimensional space. Because of this underlying feature, it does not take long for geographers to apply CA to simulate land use change, urban development and other changes of geographical phenomena. CA have become especially, useful as a tool for modelling urban spatial dynamics and encouraging results have been documented (Deadman et al. 1993, Batty and Xie 1994a, Batty and Xie 1997, White and Engelen 1997). The advantages are that the future trajectory of urban morphology can be shown virtually during the simulation processes. The rapid development of GIS helps to foster the application of CA in urban Internationa l Journal of Geographica l Information Science ISSN 1365-8816 print/ISSN 1362-3087 online © 2000 Taylor & Francis Ltd http://www.tandf.co.uk/journals/tf/13658816.html X. L i and A. G. Yeh 132 simulation. Some researches indicate that cell-based GIS may indeed serve as a useful tool for implementing cellular automata models for the purposes of geographical analysis (Itami 1994). Although current GIS are not designed for fast iterative computation, cellular automata can still be used by creating batch ® les that contain iterative command sequences. While linking cellular automata to GIS can overcome some of the limitations of current GIS (White and Engelen 1997), CA can bene® t from the useful information provided by GIS in de® ning transition rules. The data realism requirement of CA can be best satis® ed with the aid of GIS (Couclelis 1997). Space no longer needs to be uniform since the spatial di erence equations can be easily developed in the context of GIS (Batty and Xie 1994b). Most current GIS techniques have limitations in modelling changes in the landscape over time, but the integration of CA and GIS has demonstrated considerable potential (Itami 1988, Deadman et al. 1993). 
The limitations of contemporary GIS include its poor ability to handle dynamic spatial models, poor performance for many operations, and poor handling of the temporal dimension (Park and Wagner 1997 ). In coupling GIS with CA, CA can serves as an analytical engine to provide a ̄ exible framework for the programming and running of dynamic spatial models. 2. Constrained CA for the planning of sustainable urban development Interest in sustainable urban development has increased rapidly in recent years. Unfortunately, the concept of sustainable urban development is debatable because unique de® nitions and scopes do not exist (Haughton and Hunter 1994). However, this concept is very important to our society in dealing with its increasingly pressing resource and environmental problems. As more nations are implementing this concept in their development plans, it has created important impacts on national policies and urban planning. The concern over sustainable urban development will continue to grow, especially in the developing countries which are undergoing rapid urbanization. A useful way to clarify its ambiguity is to set up some working de® nitions. Some speci® c and narrow de® nitions do exist for special circumstances but there are no commonly accepted de® nitions. The working de® nitions can help to eliminate ambiguities and ® nd out solutions and better alternatives to existing development patterns. The conversion of agricultural land into urban land uses in the urbanization processes has become a serious issue for sustainable urban development in the developing countries. Take China as an example, it cannot a ord to lose a signi® cant amount of its valuable agricultural land because it has a huge growing population to feed. Unfortunately, in recent years, a large amount of such land have been unnecessarily lost and the forms of existing urban development cannot help to sustain its further development (Yeh and Li 1997, Yeh and Li 1998). The complete depletion of agricultural land resources would not be far away in some fast growing areas if such development trends continued. The main issue of sustainable urban development is to search for better urban forms that can help to sustain development, especially the minimization of unnecessary agricultural land loss. Four operational criteria for sustainable urban forms can be used: (1 ) not to convert too much agricultural land at the early stages of development; (2 ) to decide the amount of land consumption based on available land resources and population growth; (3 ) to guide urban development to sites which are less important for food production; and (4 ) to maintain compact development patterns. The objective of this research is to develop an operational CA model for Modelling sustainable urban development 133 sustainable urban development. A number of advantages have been identi® ed in the application of CA in urban simulation (Wolfram 1984, Itami 1988). Cellular automata are seen not only as a framework for dynamic spatial modelling but as a paradigm for thinking about complex spatial-temporal phenomena and an experimental laboratory for testing ideas (Itami 1994 ). Formally, standard cellular automata may be generalised as follows: St+1 = f (St, N ) (1 ) where S is a set of all possible states of the cellular automata, N is a neighbourhood of all cells providing input values for the function f, and f is a transition function that de® nes the change of the state from t to t+1. Standard cellular automata apply a b̀ottom-up’ approach. 
The approach argues that local rules can create complex patterns by running the models in iterations. It is central to the idea that cities should work from particular to general, and that they should seek to understand the small scale in order to understand the large (Batty and Xie 1994a). It is amazing to see that real urban systems can be modelled based on microscopic behaviour that may be the CA model’s most useful advantage . However, the t̀op-down’ critique nevertheless needs to be taken seriously. An example is that central governments have the power to control overall land development patterns and the amount of land consumption. With the implementations of sustainable elements into cellular automata, a new paradigm for thinking about urban planning emerges. It is possible to embed some constraints in the transition rules of cellular automata so that urban growth can be rationalised according to a set of pre-de® ned sustainable criteria. However, such experiments are very limited since many researchers just focus on the simulation of possible urban evolution and the understanding of growth mechanisms using CA techniques. The constrained cellular automata should be able to provide much better alternatives to actual development patterns. A good example is to produce a c̀ompact’ urban form using CA models. The need for sustainable cities is readily apparent in recent years. A particular issue is to seek the most suitable form for sustainable urban development. The growing spread of urban areas accelerating at an alarming rate in the last few decades re ̄ ects the dramatic pressure of human development on nature. The steady rise in urban areas and decline in agricultural land have led to the worsening of food production and other environmental problems. Urban development towards a compact form has been proposed as a means to alleviate the increasingly intensi® ed land use con ̄ icts. The morphology of a city is an important feature in the c̀ompact city theory’ (Jenks et al. 1996). Evidence indicates a strong link between urban form and sustainable development, although it is not simple and straightforward. Compact urban form can be a major means in guiding urban development to sustainability, especially in reducing the negative e ects of the present dispersed development in Western cities. However, one of the frequent problems in the compact city debate is the lack of proper tools to ensure successful implementation of the compact city because of its complexity (Burton et al. 1996). This study demonstrates that the constrained CA can be used to model compact cities and sustainable urban forms based on local, regional and global constraints. 3. Suitability and constraints for sustainable urban forms using CA In this constrained CA model, there are three important aspects of sustainable urban forms that need to be consideredÐ compact patterns, land q",
"title": ""
},
{
"docid": "neg:1840449_18",
"text": "Deep evolutionary network structured representation (DENSER) is a novel evolutionary approach for the automatic generation of deep neural networks (DNNs) which combines the principles of genetic algorithms (GAs) with those of dynamic structured grammatical evolution (DSGE). The GA-level encodes the macro structure of evolution, i.e., the layers, learning, and/or data augmentation methods (among others); the DSGE-level specifies the parameters of each GA evolutionary unit and the valid range of the parameters. The use of a grammar makes DENSER a general purpose framework for generating DNNs: one just needs to adapt the grammar to be able to deal with different network and layer types, problems, or even to change the range of the parameters. DENSER is tested on the automatic generation of convolutional neural networks (CNNs) for the CIFAR-10 dataset, with the best performing networks reaching accuracies of up to 95.22%. Furthermore, we take the fittest networks evolved on the CIFAR-10, and apply them to classify MNIST, Fashion-MNIST, SVHN, Rectangles, and CIFAR-100. The results show that the DNNs discovered by DENSER during evolution generalise, are robust, and scale. The most impressive result is the 78.75% classification accuracy on the CIFAR-100 dataset, which, to the best of our knowledge, sets a new state-of-the-art on methods that seek to automatically design CNNs.",
"title": ""
},
{
"docid": "neg:1840449_19",
"text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.",
"title": ""
}
] |
1840450 | Twitter-Based User Modeling for News Recommendations | [
{
"docid": "pos:1840450_0",
"text": "We propose and evaluate a probabilistic framework for estimating a Twitter user's city-level location based purely on the content of the user's tweets, even in the absence of any other geospatial cues. By augmenting the massive human-powered sensing capabilities of Twitter and related microblogging services with content-derived location information, this framework can overcome the sparsity of geo-enabled features in these services and enable new location-based personalized information services, the targeting of regional advertisements, and so on. Three of the key features of the proposed approach are: (i) its reliance purely on tweet content, meaning no need for user IP information, private login information, or external knowledge bases; (ii) a classification component for automatically identifying words in tweets with a strong local geo-scope; and (iii) a lattice-based neighborhood smoothing model for refining a user's location estimate. The system estimates k possible locations for each user in descending order of confidence. On average we find that the location estimates converge quickly (needing just 100s of tweets), placing 51% of Twitter users within 100 miles of their actual location.",
"title": ""
},
{
"docid": "pos:1840450_1",
"text": "Recently the world of the web has become more social and more real-time. Facebook and Twitter are perhaps the exemplars of a new generation of social, real-time web services and we believe these types of service provide a fertile ground for recommender systems research. In this paper we focus on one of the key features of the social web, namely the creation of relationships between users. Like recent research, we view this as an important recommendation problem -- for a given user, UT which other users might be recommended as followers/followees -- but unlike other researchers we attempt to harness the real-time web as the basis for profiling and recommendation. To this end we evaluate a range of different profiling and recommendation strategies, based on a large dataset of Twitter users and their tweets, to demonstrate the potential for effective and efficient followee recommendation.",
"title": ""
}
] | [
{
"docid": "neg:1840450_0",
"text": "Automated discovery of early visual concepts from raw image data is a major open challenge in AI research. Addressing this problem, we propose an unsupervised approach for learning disentangled representations of the underlying factors of variation. We draw inspiration from neuroscience, and show how this can be achieved in an unsupervised generative model by applying the same learning pressures as have been suggested to act in the ventral visual stream in the brain. By enforcing redundancy reduction, encouraging statistical independence, and exposure to data with transform continuities analogous to those to which human infants are exposed, we obtain a variational autoencoder (VAE) framework capable of learning disentangled factors. Our approach makes few assumptions and works well across a wide variety of datasets. Furthermore, our solution has useful emergent properties, such as zero-shot inference and an intuitive understanding of “objectness”.",
"title": ""
},
{
"docid": "neg:1840450_1",
"text": "The whole world is changed rapidly and using the current technologies Internet becomes an essential need for everyone. Web is used in every field. Most of the people use web for a common purpose like online shopping, chatting etc. During an online shopping large number of reviews/opinions are given by the users that reflect whether the product is good or bad. These reviews need to be explored, analyse and organized for better decision making. Opinion Mining is a natural language processing task that deals with finding orientation of opinion in a piece of text with respect to a topic. In this paper a document based opinion mining system is proposed that classify the documents as positive, negative and neutral. Negation is also handled in the proposed system. Experimental results using reviews of movies show the effectiveness of the system.",
"title": ""
},
{
"docid": "neg:1840450_2",
"text": "A widely cited 1993 Computer article described failures in a software-controlled radiation machine that massively overdosed six people in the late 1980s, resulting in serious injury and fatalities. How far have safety-critical systems come since then?",
"title": ""
},
{
"docid": "neg:1840450_3",
"text": "The successes of deep learning in recent years has been fueled by the development of innovative new neural network architectures. However, the design of a neural network architecture remains a difficult problem, requiring significant human expertise as well as computational resources. In this paper, we propose a method for transforming a discrete neural network architecture space into a continuous and differentiable form, which enables the use of standard gradient-based optimization techniques for this problem, and allows us to learn the architecture and the parameters simultaneously. We evaluate our methods on the Udacity steering angle prediction dataset, and show that our method can discover architectures with similar or better predictive accuracy but significantly fewer parameters and smaller computational cost.",
"title": ""
},
{
"docid": "neg:1840450_4",
"text": "We review the task of Sentence Pair Scoring, popular in the literature in various forms — viewed as Answer Sentence Selection, Semantic Text Scoring, Next Utterance Ranking, Recognizing Textual Entailment, Paraphrasing or e.g. a component of Memory Networks. We argue that all such tasks are similar from the model perspective and propose new baselines by comparing the performance of common IR metrics and popular convolutional, recurrent and attentionbased neural models across many Sentence Pair Scoring tasks and datasets. We discuss the problem of evaluating randomized models, propose a statistically grounded methodology, and attempt to improve comparisons by releasing new datasets that are much harder than some of the currently used well explored benchmarks. We introduce a unified open source software framework with easily pluggable models and tasks, which enables us to experiment with multi-task reusability of trained sentence models.",
"title": ""
},
{
"docid": "neg:1840450_5",
"text": "Schottky junctions have been realized by evaporating gold spots on top of sexithiophen (6T), which is deposited on TiO 2 or ZnO with e-beam and spray pyrolysis. Using Mott-Schottky analysis of 6T/TiO2 and 6T/ZnO devices acceptor densities of 4.5x10(16) and 3.7x10(16) cm(-3) are obtained, respectively. For 6T/TiO2 deposited with the e-beam evaporation a conductivity of 9x10(-8) S cm(-1) and a charge carrier mobility of 1.2x10(-5) cm2/V s is found. Impedance spectroscopy is used to model the sample response in detail in terms of resistances and capacitances. An equivalent circuit is derived from the impedance measurements. The high-frequency data are analyzed in terms of the space-charge capacitance. In these frequencies shallow acceptor states dominate the heterojunction time constant. The high-frequency RC time constant is 8 micros. Deep acceptor states are represented by a resistance and a CPE connected in series. The equivalent circuit is validated in the potential range (from -1.2 to 0.8 V) for 6T/ZnO obtained with spray pyrolysis.",
"title": ""
},
{
"docid": "neg:1840450_6",
"text": "The more the telecom services marketing paradigm evolves, the more important it becomes to retain high value customers. Traditional customer segmentation methods based on experience or ARPU (Average Revenue per User) consider neither customers’ future revenue nor the cost of servicing customers of different types. Therefore, it is very difficult to effectively identify high-value customers. In this paper, we propose a novel customer segmentation method based on customer lifecycle, which includes five decision models, i.e. current value, historic value, prediction of long-term value, credit and loyalty. Due to the difficulty of quantitative computation of long-term value, credit and loyalty, a decision tree method is used to extract important parameters related to long-term value, credit and loyalty. Then a judgments matrix formulated on the basis of characteristics of data and the experience of business experts is presented. Finally a simple and practical customer value evaluation system is built. This model is applied to telecom operators in a province in China and good accuracy is achieved. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840450_7",
"text": "The feedback dynamics from mosquito to human and back to mosquito involve considerable time delays due to the incubation periods of the parasites. In this paper, taking explicit account of the incubation periods of parasites within the human and the mosquito, we first propose a delayed Ross-Macdonald model. Then we calculate the basic reproduction number R0 and carry out some sensitivity analysis of R0 on the incubation periods, that is, to study the effect of time delays on the basic reproduction number. It is shown that the basic reproduction number is a decreasing function of both time delays. Thus, prolonging the incubation periods in either humans or mosquitos (via medicine or control measures) could reduce the prevalence of infection.",
"title": ""
},
{
"docid": "neg:1840450_8",
"text": "Syntax definitions are pervasive in modern software systems, and serve as the basis for language processing tools like parsers and compilers. Mainstream parser generators pose restrictions on syntax definitions that follow from their implementation algorithm. They hamper evolution, maintainability, and compositionality of syntax definitions. The pureness and declarativity of syntax definitions is lost. We analyze how these problems arise for different aspects of syntax definitions, discuss their consequences for language engineers, and show how the pure and declarative nature of syntax definitions can be regained.",
"title": ""
},
{
"docid": "neg:1840450_9",
"text": "Queries are the foundations of data intensive applications. In model-driven software engineering (MDE), model queries are core technologies of tools and transformations. As software models are rapidly increasing in size and complexity, traditional tools exhibit scalability issues that decrease productivity and increase costs [17]. While scalability is a hot topic in the database community and recent NoSQL efforts have partially addressed many shortcomings, this happened at the cost of sacrificing the ad-hoc query capabilities of SQL. Unfortunately, this is a critical problem for MDE applications due to their inherent workload complexity. In this paper, we aim to address both the scalability and ad-hoc querying challenges by adapting incremental graph search techniques – known from the EMF-IncQuery framework – to a distributed cloud infrastructure. We propose a novel architecture for distributed and incremental queries, and conduct experiments to demonstrate that IncQuery-D, our prototype system, can scale up from a single workstation to a cluster that can handle very large models and complex incremental queries efficiently.",
"title": ""
},
{
"docid": "neg:1840450_10",
"text": "In this paper, statics model of an underactuated wire-driven flexible robotic arm is introduced. The robotic arm is composed of a serpentine backbone and a set of controlling wires. It has decoupled bending rigidity and axial rigidity, which enables the robot large axial payload capacity. Statics model of the robotic arm is developed using the Newton-Euler method. Combined with the kinematics model, the robotic arm deformation as well as the wire motion needed to control the robotic arm can be obtained. The model is validated by experiments. Results show that, the proposed model can well predict the robotic arm bending curve. Also, the bending curve is not affected by the wire pre-tension. This enables the wire-driven robotic arm with potential applications in minimally invasive surgical operations.",
"title": ""
},
{
"docid": "neg:1840450_11",
"text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.",
"title": ""
},
{
"docid": "neg:1840450_12",
"text": "Crowdfunding is an alternative model for project financing, whereby a large and dispersed audience participates through relatively small financial contributions, in exchange for physical, financial or social rewards. It is usually done via Internet-based platforms that act as a bridge between the crowd and the projects. Over the past few years, academics have explored this topic, both empirically and theoretically. However, the mixed findings and array of theories used have come to warrant a critical review of past works. To this end, we perform a systematic review of the literature on crowdfunding and seek to extract (1) the key management theories that have been applied in the context of crowdfunding and how these have been extended, and (2) the principal factors contributing to success for the different crowdfunding models, where success entails both fundraising and timely repayment. In the process, we offer a comprehensive definition of crowdfunding and identify avenues for future research based on the gaps and conflicting results in the literature.",
"title": ""
},
{
"docid": "neg:1840450_13",
"text": "Analyzing large volumes of log events without some kind of classification is undoable nowadays due to the large amount of events. Using AI to classify events make these log events usable again. With the use of the Keras Deep Learning API, which supports many Optimizing Stochastic Gradient Decent algorithms, better known as optimizers, this research project tried these algorithms in a Long Short-Term Memory (LSTM) network, which is a variant of the Recurrent Neural Networks. These algorithms have been applied to classify and update event data stored in Elastic-Search. The LSTM network consists of five layers where the output layer is a Dense layer using the Softmax function for evaluating the AI model and making the predictions. The Categorical Cross-Entropy is the algorithm used to calculate the loss. For the same AI model, different optimizers have been used to measure the accuracy and the loss. Adam was found as the best choice with an accuracy of 29,8%.",
"title": ""
},
{
"docid": "neg:1840450_14",
"text": "A girl of 4 years and 5 months of age was admitted as an outpatient to perform ultrasound because of painless vaginal bleeding for 2–3 days. First, on transabdominal US, the kidneys, bladder, and uterus appeared normal. A translabial perineal approach revealed a hyperaemic mass (Fig. 1a, b). One day after, MRI was performed to exclude a possible aggressive lesion. MRI revealed the mass to be in apparent continuity with the urethra (Fig. 1c). Inspection under anaesthesia revealed a prolapsed urethra (Fig. 2): a doughnut of red and purple tissue surrounded the urethra, obscuring the hymeneal orifice. The patient was treated by undergoing a resection of the prolapsed mucosa and reocclusion of the mucous membrane around a Foley catheter. Three months later we saw: normal external genitalia, normal external urethral meatus, and no mucosal ectropion. We did not see any pathological secretions.",
"title": ""
},
{
"docid": "neg:1840450_15",
"text": "An ongoing challenge in electrical engineering is the design of antennas whose size is small compared to the broadcast wavelength λ. One difficulty is that the radiation resistance of a small antenna is small compared to that of the typical transmission lines that feed the antenna, so that much of the power in the feed line is reflected off the antenna rather than radiated unless a matching network is used at the antenna terminals (with a large inductance for a small dipole antenna and a large capacitance for a small loop antenna). The radiation resistance of an antenna that emits dipole radiation is proportional to the square of the peak (electric or magnetic) dipole moment of the antenna. This dipole moment is roughly the product of the peak charge times the length of the antenna in the case of a linear (electric) antenna, and is the product of the peak current times the area of the antenna in the case of a loop (magnetic) antenna. Hence, it is hard to increase the radiation resistance of small linear or loop antennas by altering their shapes. One suggestion for a small antenna is the so-called “crossed-field” antenna [2]. Its proponents are not very explicit as to the design of this antenna, so this problem is based on a conjecture as to its motivation. It is well known that in the far zone of a dipole antenna the electric and magnetic fields have equal magnitudes (in Gaussian units), and their directions are at right angles to each other and to the direction of propagation of the radiation. Furthermore, the far zone electric and magnetic fields are in phase. The argument is, I believe, that it is desirable if these conditions could also be met in the near zone of the antenna. The proponents appear to argue that in the near zone the magnetic field B is in phase with the current in a simple, small antenna, while the electric field E is in phase with the charge, but the charge and current have a 90◦ phase difference. Hence, they imply, the electric and magnetic fields are 90◦ out of phase in the near zone, so that the radiation (which is proportional to E× B) is weak. The concept of the “crossed-field” antenna seems to be based on the use of two small antennas driven 90◦ out of phase. The expectation is that the electric field of one of the A center-fed linear dipole antenna of total length l λ has radiation resistance Rlinear = (l/λ) 197 Ω, while a circular loop antenna of diameter d λ has Rloop = (d/λ) 1948 Ω. For example, if l = d = 0.1λ then Rlinear = 2 Ω and Rloop = 0.2 Ω. That there is little advantage to so-called small fractal antennas is explored in [1]. A variant based on combining a small electric dipole antenna with a small magnetic dipole (loop) antenna has been proposed by [3].",
"title": ""
},
{
"docid": "neg:1840450_16",
"text": "This paper discusses the presence of steady-state limit cycles in digitally controlled pulse-width modulation (PWM) converters, and suggests conditions on the control law and the quantization resolution for their elimination. It then introduces single-phase and multi-phase controlled digital dither as a means of increasing the effective resolution of digital PWM (DPWM) modules, allowing for the use of low resolution DPWM units in high regulation accuracy applications. Bounds on the number of bits of dither that can be used in a particular converter are derived.",
"title": ""
},
{
"docid": "neg:1840450_17",
"text": "Faceted browsing is widely used in Web shops and product comparison sites. In these cases, a fixed ordered list of facets is often employed. This approach suffers from two main issues. First, one needs to invest a significant amount of time to devise an effective list. Second, with a fixed list of facets, it can happen that a facet becomes useless if all products that match the query are associated to that particular facet. In this work, we present a framework for dynamic facet ordering in e-commerce. Based on measures for specificity and dispersion of facet values, the fully automated algorithm ranks those properties and facets on top that lead to a quick drill-down for any possible target product. In contrast to existing solutions, the framework addresses e-commerce specific aspects, such as the possibility of multiple clicks, the grouping of facets by their corresponding properties, and the abundance of numeric facets. In a large-scale simulation and user study, our approach was, in general, favorably compared to a facet list created by domain experts, a greedy approach as baseline, and a state-of-the-art entropy-based solution.",
"title": ""
},
{
"docid": "neg:1840450_18",
"text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.",
"title": ""
},
{
"docid": "neg:1840450_19",
"text": "In this work we formulate the problem of image captioning as a multimodal translation task. Analogous to machine translation, we present a sequence-to-sequence recurrent neural networks (RNN) model for image caption generation. Different from most existing work where the whole image is represented by convolutional neural network (CNN) feature, we propose to represent the input image as a sequence of detected objects which feeds as the source sequence of the RNN model. In this way, the sequential representation of an image can be naturally translated to a sequence of words, as the target sequence of the RNN model. To represent the image in a sequential way, we extract the objects features in the image and arrange them in a order using convolutional neural networks. To further leverage the visual information from the encoded objects, a sequential attention layer is introduced to selectively attend to the objects that are related to generate corresponding words in the sentences. Extensive experiments are conducted to validate the proposed approach on popular benchmark dataset, i.e., MS COCO, and the proposed model surpasses the state-of-the-art methods in all metrics following the dataset splits of previous work. The proposed approach is also evaluated by the evaluation server of MS COCO captioning challenge, and achieves very competitive results, e.g., a CIDEr of 1.029 (c5) and 1.064 (c40).",
"title": ""
}
] |
1840451 | A Single-Phase Photovoltaic Inverter Topology With a Series-Connected Energy Buffer | [
{
"docid": "pos:1840451_0",
"text": "Flyback converters show the characteristics of current source when operating in discontinuous conduction mode (DCM) and boundary conduction mode (BCM), which makes it widely used in photovoltaic grid-connected micro-inverter. In this paper, an active clamp interleaved flyback converter operating with combination of DCM and BCM is proposed in micro-inverter to achieve zero voltage switching (ZVS) for both of primary switches and fully recycle the energy in the leakage inductance. The proposed control method makes active-clamping part include only one clamp capacitor. In DCM area, only one flyback converter operates and turn-off of its auxiliary switch is suggested here to reduce resonant conduction losses, which improve the efficiency at light loads. Performance of the proposed circuit is validated by the simulation results and experimental results.",
"title": ""
}
] | [
{
"docid": "neg:1840451_0",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "neg:1840451_1",
"text": "Image saliency detection has recently witnessed rapid progress due to deep convolutional neural networks. However, none of the existing methods is able to identify object instances in the detected salient regions. In this paper, we present a salient instance segmentation method that produces a saliency mask with distinct object instance labels for an input image. Our method consists of three steps, estimating saliency map, detecting salient object contours and identifying salient object instances. For the first two steps, we propose a multiscale saliency refinement network, which generates high-quality salient region masks and salient object contours. Once integrated with multiscale combinatorial grouping and a MAP-based subset optimization framework, our method can generate very promising salient object instance segmentation results. To promote further research and evaluation of salient instance segmentation, we also construct a new database of 1000 images and their pixelwise salient instance annotations. Experimental results demonstrate that our proposed method is capable of achieving state-of-the-art performance on all public benchmarks for salient region detection as well as on our new dataset for salient instance segmentation.",
"title": ""
},
{
"docid": "neg:1840451_2",
"text": "We present a framework to synthesize character movements based on high level parameters, such that the produced movements respect the manifold of human motion, trained on a large motion capture dataset. The learned motion manifold, which is represented by the hidden units of a convolutional autoencoder, represents motion data in sparse components which can be combined to produce a wide range of complex movements. To map from high level parameters to the motion manifold, we stack a deep feedforward neural network on top of the trained autoencoder. This network is trained to produce realistic motion sequences from parameters such as a curve over the terrain that the character should follow, or a target location for punching and kicking. The feedforward control network and the motion manifold are trained independently, allowing the user to easily switch between feedforward networks according to the desired interface, without re-training the motion manifold. Once motion is generated it can be edited by performing optimization in the space of the motion manifold. This allows for imposing kinematic constraints, or transforming the style of the motion, while ensuring the edited motion remains natural. As a result, the system can produce smooth, high quality motion sequences without any manual pre-processing of the training data.",
"title": ""
},
{
"docid": "neg:1840451_3",
"text": "Sixteen residents in long-term care with advanced dementia (14 women; average age = 88) showed significantly more constructive engagement (defined as motor or verbal behaviors in response to an activity), less passive engagement (defined as passively observing an activity), and more pleasure while participating in Montessori-based programming than in regularly scheduled activities programming. Principles of Montessori-based programming, along with examples of such programming, are presented. Implications of the study and methods for expanding the use of Montessori-based dementia programming are discussed.",
"title": ""
},
{
"docid": "neg:1840451_4",
"text": "Graph coloring—also known as vertex coloring—considers the problem of assigning colors to the nodes of a graph such that adjacent nodes do not share the same color. The optimization version of the problem concerns the minimization of the number of colors used. In this paper we deal with the problem of finding valid graphs colorings in a distributed way, that is, by means of an algorithm that only uses local information for deciding the color of the nodes. The algorithm proposed in this paper is inspired by the calling behavior of Japanese tree frogs. Male frogs use their calls to attract females. Interestingly, groups of males that are located near each other desynchronize their calls. This is because female frogs are only able to correctly localize male frogs when their calls are not too close in time. The proposed algorithm makes use of this desynchronization behavior for the assignment of different colors to neighboring nodes. We experimentally show that our algorithm is very competitive with the current state of the art, using different sets of problem instances and comparing to one of the most competitive algorithms from the literature.",
"title": ""
},
{
"docid": "neg:1840451_5",
"text": "In this paper we propose a new footstep detection technique for data acquired using a triaxial geophone. The idea evolves from the investigation of geophone transduction principle. The technique exploits the randomness of neighbouring data vectors observed when the footstep is absent. We extend the same principle for triaxial signal denoising. Effectiveness of the proposed technique for transient detection and denoising are presented for real seismic data collected using a triaxial geophone.",
"title": ""
},
{
"docid": "neg:1840451_6",
"text": "We present a framework for recognizing isolated and continuous American Sign Language (ASL) sentences from three-dimensional data. The data are obtained by using physics-based three-dimensional tracking methods and then presented as input to Hidden Markov Models (HMMs) for recognition. To improve recognition performance, we model context-dependent HMMs and present a novel method of coupling three-dimensional computer vision methods and HMMs by temporally segmenting the data stream with vision methods. We then use the geometric properties of the segments to constrain the HMM framework for recognition. We show in experiments with a 53 sign vocabulary that three-dimensional features outperform two-dimensional features in recognition performance. Furthermore, we demonstrate that contextdependent modeling and the coupling of vision methods and HMMs improve the accuracy of continuous ASL recognition.",
"title": ""
},
{
"docid": "neg:1840451_7",
"text": "Plant volatiles (PVs) are lipophilic molecules with high vapor pressure that serve various ecological roles. The synthesis of PVs involves the removal of hydrophilic moieties and oxidation/hydroxylation, reduction, methylation, and acylation reactions. Some PV biosynthetic enzymes produce multiple products from a single substrate or act on multiple substrates. Genes for PV biosynthesis evolve by duplication of genes that direct other aspects of plant metabolism; these duplicated genes then diverge from each other over time. Changes in the preferred substrate or resultant product of PV enzymes may occur through minimal changes of critical residues. Convergent evolution is often responsible for the ability of distally related species to synthesize the same volatile.",
"title": ""
},
{
"docid": "neg:1840451_8",
"text": "Information theoretic measures form a fundamental class of measures for comparing clusterings, and have recently received increasing interest. Neverthel ss, a number of questions concerning their properties and inter-relationships remain unresolv ed. In this paper, we perform an organized study of information theoretic measures for clustering com parison, including several existing popular measures in the literature, as well as some newly propos ed nes. We discuss and prove their important properties, such as the metric property and the no rmalization property. We then highlight to the clustering community the importance of correct ing information theoretic measures for chance, especially when the data size is small compared to th e number of clusters present therein. Of the available information theoretic based measures, we a dvocate the normalized information distance (NID) as a general measure of choice, for it possess e concurrently several important properties, such as being both a metric and a normalized meas ure, admitting an exact analytical adjusted-for-chance form, and using the nominal [0,1] range better than other normalized variants.",
"title": ""
},
{
"docid": "neg:1840451_9",
"text": "Edges are important features in an image since they represent significant local intensity changes. They provide important clues to separate regions within an object or to identify changes in illumination. point noise. The real problem is how to enhance noisy remote sensing images and simultaneously extract the edges. Using the implemented Canny edge detector for features extraction and as an enhancement tool for remote sensing images, the result was robust with a very high enhancement level.",
"title": ""
},
{
"docid": "neg:1840451_10",
"text": "A parsing algorithm visualizer is a tool that visualizes the construction of a parser for a given context-free grammar and then illustrates the use of that parser to parse a given string. Parsing algorithm visualizers are used to teach the course on compiler construction which in invariably included in all undergraduate computer science curricula. This paper presents a new parsing algorithm visualizer that can visualize six parsing algorithms, viz. predictive parsing, simple LR parsing, canonical LR parsing, look-ahead LR parsing, Earley parsing and CYK parsing. The tool logically explains the process of parsing showing the calculations involved in each step. The output of the tool has been structured to maximize the learning outcomes and contains important constructs like FIRST and FOLLOW sets, item sets, parsing table, parse tree and leftmost or rightmost derivation depending on the algorithm being visualized. The tool has been used to teach the course on compiler construction at both undergraduate and graduate levels. An overall positive feedback was received from the students with 89% of them saying that the tool helped them in understanding the parsing algorithms. The tool is capable of visualizing multiple parsing algorithms and 88% students used it to compare the algorithms.",
"title": ""
},
{
"docid": "neg:1840451_11",
"text": "Designing agile locomotion for quadruped robots often requires extensive expertise and tedious manual tuning. In this paper, we present a system to automate this process by leveraging deep reinforcement learning techniques. Our system can learn quadruped locomotion from scratch using simple reward signals. In addition, users can provide an open loop reference to guide the learning process when more control over the learned gait is needed. The control policies are learned in a physics simulator and then deployed on real robots. In robotics, policies trained in simulation often do not transfer to the real world. We narrow this reality gap by improving the physics simulator and learning robust policies. We improve the simulation using system identification, developing an accurate actuator model and simulating latency. We learn robust controllers by randomizing the physical environments, adding perturbations and designing a compact observation space. We evaluate our system on two agile locomotion gaits: trotting and galloping. After learning in simulation, a quadruped robot can successfully perform both gaits in the real world.",
"title": ""
},
{
"docid": "neg:1840451_12",
"text": "OBJECTIVE\nThe distinct trajectories of patients with autism spectrum disorders (ASDs) have not been extensively studied, particularly regarding clinical manifestations beyond the neurobehavioral criteria from the Diagnostic and Statistical Manual of Mental Disorders. The objective of this study was to investigate the patterns of co-occurrence of medical comorbidities in ASDs.\n\n\nMETHODS\nInternational Classification of Diseases, Ninth Revision codes from patients aged at least 15 years and a diagnosis of ASD were obtained from electronic medical records. These codes were aggregated by using phenotype-wide association studies categories and processed into 1350-dimensional vectors describing the counts of the most common categories in 6-month blocks between the ages of 0 to 15. Hierarchical clustering was used to identify subgroups with distinct courses.\n\n\nRESULTS\nFour subgroups were identified. The first was characterized by seizures (n = 120, subgroup prevalence 77.5%). The second (n = 197) was characterized by multisystem disorders including gastrointestinal disorders (prevalence 24.3%) and auditory disorders and infections (prevalence 87.8%), and the third was characterized by psychiatric disorders (n = 212, prevalence 33.0%). The last group (n = 4316) could not be further resolved. The prevalence of psychiatric disorders was uncorrelated with seizure activity (P = .17), but a significant correlation existed between gastrointestinal disorders and seizures (P < .001). The correlation results were replicated by using a second sample of 496 individuals from a different geographic region.\n\n\nCONCLUSIONS\nThree distinct patterns of medical trajectories were identified by unsupervised clustering of electronic health record diagnoses. These may point to distinct etiologies with different genetic and environmental contributions. Additional clinical and molecular characterizations will be required to further delineate these subgroups.",
"title": ""
},
{
"docid": "neg:1840451_13",
"text": "The conventional sigma-delta (SigmaDelta) modulator structures used in telecommunication and audio applications usually cannot satisfy the requirements of signal processing applications for converting the wideband signals into digital samples accurately. In this paper, system design, analytical aspects and optimization methods of a third order incremental sigma-delta (SigmaDelta) modulator will be discussed and finally the designed modulator will be implemented by switched-capacitor circuits. The design of anti-aliasing filter has been integrated inside of modulator signal transfer function. It has been shown that the implemented 3rd order sigma-delta (SigmaDelta) modulator can be designed for the maximum SNR of 54 dB for minimum over- sampling ratio of 16. The modulator operating principles and its analysis in frequency domain and the topologies for its optimizing have been discussed elaborately. Simulation results on implemented modulator validate the system design and its main parameters such as stability and output dynamic range.",
"title": ""
},
{
"docid": "neg:1840451_14",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. The portion of the RF spectrum above 3GHz has largely been uxexploited for commercial mobile applications. In this paper, we reason why wireless community should start looking at 3–300GHz spectrum for mobile broadband applications. We discuss propagation and device technology challenges associated with this band as well as its unique advantages such as spectrum availability and small component sizes for mobile applications.",
"title": ""
},
{
"docid": "neg:1840451_15",
"text": "A number of surveillance scenarios require the detection and tracking of people. Although person detection and counting systems are commercially available today, there is need for further research to address the challenges of real world scenarios. The focus of this work is the segmentation of groups of people into individuals. One relevant application of this algorithm is people counting. Experiments document that the presented approach leads to robust people counts.",
"title": ""
},
{
"docid": "neg:1840451_16",
"text": "In recent years, convolutional neural networks (CNNs) based machine learning algorithms have been widely applied in computer vision applications. However, for large-scale CNNs, the computation-intensive, memory-intensive and resource-consuming features have brought many challenges to CNN implementations. This work proposes an end-to-end FPGA-based CNN accelerator with all the layers mapped on one chip so that different layers can work concurrently in a pipelined structure to increase the throughput. A methodology which can find the optimized parallelism strategy for each layer is proposed to achieve high throughput and high resource utilization. In addition, a batch-based computing method is implemented and applied on fully connected layers (FC layers) to increase the memory bandwidth utilization due to the memory-intensive feature. Further, by applying two different computing patterns on FC layers, the required on-chip buffers can be reduced significantly. As a case study, a state-of-the-art large-scale CNN, AlexNet, is implemented on Xilinx VC709. It can achieve a peak performance of 565.94 GOP/s and 391 FPS under 156MHz clock frequency which outperforms previous approaches.",
"title": ""
},
{
"docid": "neg:1840451_17",
"text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.",
"title": ""
},
{
"docid": "neg:1840451_18",
"text": "Software Defined Networking (SDN) has been proposed as a drastic shift in the networking paradigm, by decoupling network control from the data plane and making the switching infrastructure truly programmable. The key enabler of SDN, OpenFlow, has seen widespread deployment on production networks and its adoption is constantly increasing. Although openness and programmability are primary features of OpenFlow, security is of core importance for real-world deployment. In this work, we perform a security analysis of OpenFlow using STRIDE and attack tree modeling methods, and we evaluate our approach on an emulated network testbed. The evaluation assumes an attacker model with access to the network data plane. Finally, we propose appropriate counter-measures that can potentially mitigate the security issues associated with OpenFlow networks. Our analysis and evaluation approach are not exhaustive, but are intended to be adaptable and extensible to new versions and deployment contexts of OpenFlow.",
"title": ""
},
{
"docid": "neg:1840451_19",
"text": "As social media has become more integrated into peoples’ daily lives, its users have begun turning to it in times of distress. People use Twitter, Facebook, YouTube, and other social media platforms to broadcast their needs, propagate rumors and news, and stay abreast of evolving crisis situations. Disaster relief organizations have begun to craft their efforts around pulling data about where aid is needed from social media and broadcasting their own needs and perceptions of the situation. They have begun deploying new software platforms to better analyze incoming data from social media, as well as to deploy new technologies to specifically harvest messages from disaster situations.",
"title": ""
}
] |
1840452 | Scene Flow Estimation: A Survey | [
{
"docid": "pos:1840452_0",
"text": "The novel concept of total generalized variation of a function u is introduced and some of its essential properties are proved. Differently from the bounded variation semi-norm, the new concept involves higher order derivatives of u. Numerical examples illustrate the high quality of this functional as a regularization term for mathematical imaging problems. In particular this functional selectively regularizes on different regularity levels and does not lead to a staircasing effect.",
"title": ""
}
] | [
{
"docid": "neg:1840452_0",
"text": "The realisation of domain-speci®c languages (DSLs) diers in fundamental ways from that of traditional programming languages. We describe eight recurring patterns that we have identi®ed as being used for DSL design and implementation. Existing languages can be extended, restricted, partially used, or become hosts for DSLs. Simple DSLs can be implemented by lexical processing. In addition, DSLs can be used to create front-ends to existing systems or to express complicated data structures. Finally, DSLs can be combined using process pipelines. The patterns described form a pattern language that can be used as a building block for a systematic view of the software development process involving DSLs. Ó 2001 Elsevier Science Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840452_1",
"text": "Dynamically Reconfigurable Systems (DRS), implemented using Field-Programmable Gate Arrays (FPGAs), allow hardware logic to be partially reconfigured while the rest of a design continues to operate. By mapping multiple reconfigurable hardware modules to the same physical region of an FPGA, such systems are able to time-multiplex their circuits at run time and can adapt to changing execution requirements. This architectural flexibility introduces challenges for verifying system functionality. New simulation approaches need to extend traditional simulation techniques to assist designers in testing and debugging the time-varying behavior of DRS. Another significant challenge is the effective use of tools so as to reduce the number of design iterations. This thesis focuses on simulation-based functional verification of modular reconfigurable DRS designs. We propose a methodology and provide tools to assist designers in verifying DRS designs while part of the design is undergoing reconfiguration. This thesis analyzes the challenges in verifying DRS designs with respect to the user design and the physical implementation of such systems. We propose using a simulationonly layer to emulate the behavior of target FPGAs and accurately model the characteristic features of reconfiguration. The simulation-only layer maintains verification productivity by abstracting away the physical details of the FPGA fabric. Furthermore, since the design does not need to be modified for simulation purposes, the design as implemented instead of some variation of it is verified. We provide two possible implementations of the simulation-only layer. Extended ReChannel is a SystemC library that can be used to model DRS at a high level. ReSim is a library to support RTL simulation of a DRS reconfiguring both its logic and state. Through a number of case studies, we demonstrate that with insignificant overheads, our approach seamlessly integrates with the existing, mainstream DRS design flow and with wellestablished verification methodologies such as top-down modeling and coverage-driven verification. The case studies also serve as a guide in the use of our libraries to identify bugs that are related to Dynamic Partial Reconfiguration. Our results demonstrate that using the simulation-only layer is an effective approach to the simulation-based functional verification of DRS designs.",
"title": ""
},
{
"docid": "neg:1840452_2",
"text": "Issue No. 2, Fall 2002 www.spacejournal.org Page 1 of 29 A Prediction Model that Combines Rain Attenuation and Other Propagation Impairments Along EarthSatellite Paths Asoka Dissanayake, Jeremy Allnutt, Fatim Haidara Abstract The rapid growth of satellite services using higher frequency bands such as the Ka-band has highlighted a need for estimating the combined effect of different propagation impairments. Many projected Ka-band services will use very small terminals and, for some, rain effects may only form a relatively small part of the total propagation link margin. It is therefore necessary to identify and predict the overall impact of every significant attenuating effect along any given path. A procedure for predicting the combined effect of rain attenuation and several other propagation impairments along earth-satellite paths is presented. Where accurate model exist for some phenomena, these have been incorporated into the prediction procedure. New models were developed, however, for rain attenuation, cloud attenuation, and low-angle fading to provide more overall accuracy, particularly at very low elevation angles (<10°). In the absence of a detailed knowledge of the occurrence probabilities of different impairments, an empirical approach is taken in estimating their combined effects. An evaluation of the procedure is made using slant-path attenuation data that have been collected with simultaneous beacon and radiometer measurements which allow a near complete account of different impairments. Results indicate that the rain attenuation element of the model provides the best average accuracy globally between 10 and 30 GHz and that the combined procedure gives prediction accuracies comparable to uncertainties associated with the year-to-year variability of path attenuation.",
"title": ""
},
{
"docid": "neg:1840452_3",
"text": "Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. [37] showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227 × 227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models Plug and Play Generative Networks. PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable condition network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization [40], which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.",
"title": ""
},
{
"docid": "neg:1840452_4",
"text": "Construction sites are dynamic and complicated systems. The movement and interaction of people, goods and energy make construction safety management extremely difficult. Due to the ever-increasing amount of information, traditional construction safety management has operated under difficult circumstances. As an effective way to collect, identify and process information, sensor-based technology is deemed to provide new generation of methods for advancing construction safety management. It makes the real-time construction safety management with high efficiency and accuracy a reality and provides a solid foundation for facilitating its modernization, and informatization. Nowadays, various sensor-based technologies have been adopted for construction safety management, including locating sensor-based technology, vision-based sensing and wireless sensor networks. This paper provides a systematic and comprehensive review of previous studies in this field to acknowledge useful findings, identify the research gaps and point out future research directions.",
"title": ""
},
{
"docid": "neg:1840452_5",
"text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.",
"title": ""
},
{
"docid": "neg:1840452_6",
"text": "We present probabilistic neural programs, a framework for program induction that permits flexible specification of both a computational model and inference algorithm while simultaneously enabling the use of deep neural networks. Probabilistic neural programs combine a computation graph for specifying a neural network with an operator for weighted nondeterministic choice. Thus, a program describes both a collection of decisions as well as the neural network architecture used to make each one. We evaluate our approach on a challenging diagram question answering task where probabilistic neural programs correctly execute nearly twice as many programs as a baseline model.",
"title": ""
},
{
"docid": "neg:1840452_7",
"text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.",
"title": ""
},
{
"docid": "neg:1840452_8",
"text": "PURPOSE\nTo evaluate the potential of third-harmonic generation (THG) microscopy combined with second-harmonic generation (SHG) and two-photon excited fluorescence (2PEF) microscopies for visualizing the microstructure of the human cornea and trabecular meshwork based on their intrinsic nonlinear properties.\n\n\nMETHODS\nFresh human corneal buttons and corneoscleral discs from an eye bank were observed under a multiphoton microscope incorporating a titanium-sapphire laser and an optical parametric oscillator for the excitation, and equipped with detection channels in the forward and backward directions.\n\n\nRESULTS\nOriginal contrast mechanisms of THG signals in cornea with physiological relevance were elucidated. THG microscopy with circular incident polarization detected microscopic anisotropy and revealed the stacking and distribution of stromal collagen lamellae. THG imaging with linear incident polarization also revealed cellular and anchoring structures with micrometer resolution. In edematous tissue, a strong THG signal around cells indicated the local presence of water. Additionally, SHG signals reflected the distribution of fibrillar collagen, and 2PEF imaging revealed the elastic component of the trabecular meshwork and the fluorescence of metabolically active cells.\n\n\nCONCLUSIONS\nThe combined imaging modalities of THG, SHG, and 2PEF provide key information about the physiological state and microstructure of the anterior segment over its entire thickness with remarkable contrast and specificity. This imaging method should prove particularly useful for assessing glaucoma and corneal physiopathologies.",
"title": ""
},
{
"docid": "neg:1840452_9",
"text": "Plastic debris is known to undergo fragmentation at sea, which leads to the formation of microscopic particles of plastic; the so called 'microplastics'. Due to their buoyant and persistent properties, these microplastics have the potential to become widely dispersed in the marine environment through hydrodynamic processes and ocean currents. In this study, the occurrence and distribution of microplastics was investigated in Belgian marine sediments from different locations (coastal harbours, beaches and sublittoral areas). Particles were found in large numbers in all samples, showing the wide distribution of microplastics in Belgian coastal waters. The highest concentrations were found in the harbours where total microplastic concentrations of up to 390 particles kg(-1) dry sediment were observed, which is 15-50 times higher than reported maximum concentrations of other, similar study areas. The depth profile of sediment cores suggested that microplastic concentrations on the beaches reflect the global plastic production increase.",
"title": ""
},
{
"docid": "neg:1840452_10",
"text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.",
"title": ""
},
{
"docid": "neg:1840452_11",
"text": "Brain Computer Interface (BCI) research advanced for more than forty years, providing a rich variety of sophisticated data analysis methods. Yet, most BCI studies have been restricted to the laboratory with controlled and undisturbed environment. BCI research was aiming at developing tools for communication and control. Recently, BCI research has broadened to explore novel applications for improved man-machine interaction. In the present study, we investigated the option to employ neurotechnology in an industrial environment for the psychophysiological optimization of working conditions in such settings. Our findings suggest that it is possible to use BCI-related analysis techniques to qualify responses of an operator by assessing the depth of cognitive processing on the basis of neuronal correlates of behaviourally relevant measures. This could lead to assistive technologies helping to avoid accidents in working environments by designing a collaborative workspace in which the environment takes into account the actual cognitive mental state of the operator.",
"title": ""
},
{
"docid": "neg:1840452_12",
"text": "The Epstein-Barr virus (EBV) is associated with a broad spectrum of diseases, mainly because of its genomic characteristics, which result in different latency patterns in immune cells and infective mechanisms. The patient described in this report is a previously healthy young man who presented to the emergency department with clinical features consistent with meningitis and genital ulcers, which raised concern that the herpes simplex virus was the causative agent. However, the polymerase chain reaction of cerebral spinal fluid was positive for EBV. The authors highlight the importance of this infection among the differential diagnosis of central nervous system involvement and genital ulceration.",
"title": ""
},
{
"docid": "neg:1840452_13",
"text": "Impact of occupational stress on employee performance has been recognized as an important area of concern for organizations. Negative stress affects the physical and mental health of the employees that in turn affects their performance on job. Research into the relationship between stress and job performance has been neglected in the occupational stress literature (Jex, 1998). It is therefore significant to understand different Occupational Stress Inducers (OSI) on one hand and their impact on different aspects of job performance on the other. This article reviews the available literature to understand the phenomenon so as to develop appropriate stress management strategies to not only save the employees from variety of health problems but to improve their performance and the performance of the organization. 35 Occupational Stress Inducers (OSI) were identified through a comprehensive review of articles and reports published in the literature of management and allied disciplines between 1990 and 2014. A conceptual model is proposed towards the end to study the impact of stress on employee job performance. The possible data analysis techniques are also suggested providing direction for future research.",
"title": ""
},
{
"docid": "neg:1840452_14",
"text": "We introduce a method to generate whole body motion of a humanoid robot such that the resulted total linear/angular momenta become specified values. First, we derive a linear equation which gives the total momentum of a robot from its physical parameters, the base link speed and the joint speeds. Constraints between the legs and the environment are also considered. The whole body motion is calculated from a given momentum reference by using a pseudo-inverse of the inertia matrix. As examples, we generated the kicking and walking motions and tested on the actual humanoid robot HRP-2. This method, the Resolved Momentum Control, gives us a unified framework to generate various maneuver of humanoid robots.",
"title": ""
},
{
"docid": "neg:1840452_15",
"text": "The designations employed and the presentation of material in this information product do not imply the expression of any opinion whatsoever on the part of the Food and Agriculture Organization of the United Nations (FAO) concerning the legal or development status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The mention of specific companies or products of manufacturers, whether or not these have been patented, does not imply that these have been endorsed or recommended by FAO in preference to others of a similar nature that are not mentioned. The views expressed in this information product are those of the author(s) and do not necessarily reflect the views of FAO.",
"title": ""
},
{
"docid": "neg:1840452_16",
"text": "Many real-time tasks, such as human-computer interaction, require fast and efficient facial gender classification. Although deep CNN nets have been very effective for a multitude of classification tasks, their high space and time demands make them impractical for personal computers and mobile devices without a powerful GPU. In this paper, we develop a 16-layer, yet lightweight, neural network which boosts efficiency while maintaining high accuracy. Our net is pruned from the VGG-16 model starting from the last convolutional (conv) layer where we find neuron activations are highly uncorrelated given the gender. Through Fisher's Linear Discriminant Analysis (LDA), we show that this high decorrelation makes it safe to discard directly last conv layer neurons with high within-class variance and low between-class variance. Combined with either Support Vector Machines (SVM) or Bayesian classification, the reduced CNNs are capable of achieving comparable (or even higher) accuracies on the LFW and CelebA datasets than the original net with fully connected layers. On LFW, only four Conv5_3 neurons are able to maintain a comparably high recognition accuracy, which results in a reduction of total network size by a factor of 70X with a 11 fold speedup. Comparisons with a state-of-the-art pruning method (as well as two smaller nets) in terms of accuracy loss and convolutional layers pruning rate are also provided.",
"title": ""
},
{
"docid": "neg:1840452_17",
"text": "We present a photo-realistic training and evaluation simulator (Sim4CV) (http://www.sim4cv.org) with extensive applications across various fields of computer vision. Built on top of the Unreal Engine, the simulator integrates full featured physics based cars, unmanned aerial vehicles (UAVs), and animated human actors in diverse urban and suburban 3D environments. We demonstrate the versatility of the simulator with two case studies: autonomous UAV-based tracking of moving objects and autonomous driving using supervised learning. The simulator fully integrates both several state-of-the-art tracking algorithms with a benchmark evaluation tool and a deep neural network architecture for training vehicles to drive autonomously. It generates synthetic photo-realistic datasets with automatic ground truth annotations to easily extend existing real-world datasets and provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.",
"title": ""
},
{
"docid": "neg:1840452_18",
"text": "The Electrocardiogram (ECG) is commonly used to detect arrhythmias. Traditionally, a single ECG observation is used for diagnosis, making it difficult to detect irregular arrhythmias. Recent technology developments, however, have made it cost-effective to collect large amounts of raw ECG data over time. This promises to improve diagnosis accuracy, but the large data volume presents new challenges for cardiologists. This paper introduces ECGLens, an interactive system for arrhythmia detection and analysis using large-scale ECG data. Our system integrates an automatic heartbeat classification algorithm based on convolutional neural network, an outlier detection algorithm, and a set of rich interaction techniques. We also introduce A-glyph, a novel glyph designed to improve the readability and comparison of ECG signals. We report results from a comprehensive user study showing that A-glyph improves the efficiency in arrhythmia detection, and demonstrate the effectiveness of ECGLens in arrhythmia detection through two expert interviews.",
"title": ""
},
{
"docid": "neg:1840452_19",
"text": "Automatic generation control (AGC) regulates mechanical power generation in response to load changes through local measurements. Its main objective is to maintain system frequency and keep energy balanced within each control area in order to maintain the scheduled net interchanges between control areas. The scheduled interchanges as well as some other factors of AGC are determined at a slower time scale by considering a centralized economic dispatch (ED) problem among different generators. However, how to make AGC more economically efficient is less studied. In this paper, we study the connections between AGC and ED by reverse engineering AGC from an optimization view, and then we propose a distributed approach to slightly modify the conventional AGC to improve its economic efficiency by incorporating ED into the AGC automatically and dynamically.",
"title": ""
}
] |
1840453 | PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory | [
{
"docid": "pos:1840453_0",
"text": "Microprocessors and memory systems suffer from a growing gap in performance. We introduce Active Pages, a computation model which addresses this gap by shifting data-intensive computations to the memory system. An Active Page consists of a page of data and a set of associated functions which can operate upon that data. We describe an implementation of Active Pages on RADram (Reconfigurable Architecture DRAM), a memory system based upon the integration of DRAM and reconfigurable logic. Results from the SimpleScalar simulator [BA97] demonstrate up to 1000X speedups on several applications using the RADram system versus conventional memory systems. We also explore the sensitivity of our results to implementations in other memory technologies.",
"title": ""
}
] | [
{
"docid": "neg:1840453_0",
"text": "Building facade detection is an important problem in comput er vision, with applications in mobile robotics and semanti c scene understanding. In particular, mobile platform localizati on and guidance in urban environments can be enabled with acc urate models of the various building facades in a scene. Toward that end, w e present a system for detection, segmentation, and paramet er estimation of building facades in stereo imagery. The propo sed method incorporates multilevel appearance and dispari ty features in a binary discriminative model, and generates a set of cand id te planes by sampling and clustering points from the imag e with Random Sample Consensus (RANSAC), using local normal estim ates derived from Principal Component Analysis (PCA) to inf rm the planar models. These two models are incorporated into a t w -layer Markov Random Field (MRF): an appearanceand disp ar tybased discriminative classifier at the mid-level, and a geom etric model to segment the building pixels into facades at th e highlevel. By using object-specific stereo features, our discri minative classifier is able to achieve substantially higher accuracy than standard boosting or modeling with only appearance-based f eatures. Furthermore, the results of our MRF classification indicate a strong improvement in accuracy for the binary building dete ction problem and the labeled planar surface models provide a good approximation to the ground truth planes.",
"title": ""
},
{
"docid": "neg:1840453_1",
"text": "In this paper, we propose a knowledge-guided pose grammar network to tackle the problem of 3D human pose estimation. Our model directly takes 2D poses as inputs and learns the generalized 2D-3D mapping function, which renders high applicability. The proposed network consists of a base network which efficiently captures pose-aligned features and a hierarchy of Bidirectional RNNs on top of it to explicitly incorporate a set of knowledge (e.g., kinematics, symmetry, coordination) and thus enforce high-level constraints over human poses. In learning, we develop a pose-guided sample simulator to augment training samples in virtual camera views, which further improves the generalization ability of our model. We validate our method on public 3D human pose benchmarks and propose a new evaluation protocol working on cross-view setting to verify the generalization ability of different methods. We empirically observe that most state-ofthe-arts face difficulty under such setting while our method obtains superior performance.",
"title": ""
},
{
"docid": "neg:1840453_2",
"text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.",
"title": ""
},
{
"docid": "neg:1840453_3",
"text": "We introduce a new approach to intrinsic image decomposition, the task of decomposing a single image into albedo and shading components. Our strategy, which we term direct intrinsics, is to learn a convolutional neural network (CNN) that directly predicts output albedo and shading channels from an input RGB image patch. Direct intrinsics is a departure from classical techniques for intrinsic image decomposition, which typically rely on physically-motivated priors and graph-based inference algorithms. The large-scale synthetic ground-truth of the MPI Sintel dataset plays the key role in training direct intrinsics. We demonstrate results on both the synthetic images of Sintel and the real images of the classic MIT intrinsic image dataset. On Sintel, direct intrinsics, using only RGB input, outperforms all prior work, including methods that rely on RGB+Depth input. Direct intrinsics also generalizes across modalities, our Sintel-trained CNN produces quite reasonable decompositions on the real images of the MIT dataset. Our results indicate that the marriage of CNNs with synthetic training data may be a powerful new technique for tackling classic problems in computer vision.",
"title": ""
},
{
"docid": "neg:1840453_4",
"text": "Mass shootings are a particular problem in the United States, with one mass shooting occurring approximately every 12.5 days. Recently a \"contagion\" effect has been suggested wherein the occurrence of one mass shooting increases the likelihood of another mass shooting occurring in the near future. Although contagion is a convenient metaphor used to describe the temporal spread of a behavior, it does not explain how the behavior spreads. Generalized imitation is proposed as a better model to explain how one person's behavior can influence another person to engage in similar behavior. Here we provide an overview of generalized imitation and discuss how the way in which the media report a mass shooting can increase the likelihood of another shooting event. Also, we propose media reporting guidelines to minimize imitation and further decrease the likelihood of a mass shooting.",
"title": ""
},
{
"docid": "neg:1840453_5",
"text": "This work is concerned with the field of static program analysis —in particular with analyses aimed to guarantee certain security properties of programs, like confidentiality and integrity. Our approach uses socalled dependence graphs to capture the program behavior as well as the information flow between the individual program points. Using this technique, we can guarantee for example that a program does not reveal any information about a secret password. In particular we focus on techniques that improve the dependence graph computation —the basis for many advanced security analyses. We incorporated the presented algorithms and improvements into our analysis tool Joana and published its source code as open source. Several collaborations with other researchers and publications using Joana demonstrate the relevance of these improvements for practical research. This work consists essentially of three parts. Part 1 deals with improvements in the computation of the dependence graph, Part 2 introduces a new approach to the analysis of incomplete programs and Part 3 shows current use cases of Joana on concrete examples. In the first part we describe the algorithms used to compute a dependence graph, with special attention to the problems and challenges that arise when analyzing object-oriented languages such as Java. For example we present an analysis that improves the precision of detected control flow by incorporating the effects of exceptions. The main improvement concerns the way side effects —caused by communication over methods boundaries— are modelled. Dependence graphs capture side effects —memory locations read or changed by a method— in the form of additional nodes called parameter nodes. We show that the structure and computation of these nodes have a huge impact on both the precision and scalability of the entire analysis. The so-called parameter model describes the algorithms used to compute these nodes. We explain the weakness of the old parameter model based on object-trees and present our improvements in form of a new model using object-graphs. The new graph structure merges redundant information of multiple nodes into a single node and thus reduces the number of overall parameter nodes",
"title": ""
},
{
"docid": "neg:1840453_6",
"text": "Realistic rendering techniques of outdoor Augmented Reality (AR) has been an attractive topic since the last two decades considering the sizeable amount of publications in computer graphics. Realistic virtual objects in outdoor rendering AR systems require sophisticated effects such as: shadows, daylight and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposed a much newer, unique technique to achieve realistic real-time outdoor rendering, while taking into account the interaction between sky colours and objects in AR systems with respect to shadows in any specific location, date and time. This approach involves three main phases, which cover different outdoor AR rendering requirements. Firstly, sky colour was generated with respect to the position of the sun. Second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows through its effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique has significantly improved the realism of real-time outdoor AR rendering, thus solving the problem of realistic AR systems.",
"title": ""
},
{
"docid": "neg:1840453_7",
"text": "In this paper we applied multilabel classification algorithms to the EUR-Lex database of legal documents of the European Union. On this document collection, we studied three different multilabel classification problems, the largest being the categorization into the EUROVOC concept hierarchy with almost 4000 classes. We evaluated three algorithms: (i) the binary relevance approach which independently trains one classifier per label; (ii) the multiclass multilabel perceptron algorithm, which respects dependencies between the base classifiers; and (iii) the multilabel pairwise perceptron algorithm, which trains one classifier for each pair of labels. All algorithms use the simple but very efficient perceptron algorithm as the underlying classifier, which makes them very suitable for large-scale multilabel classification problems. The main challenge we had to face was that the almost 8,000,000 perceptrons that had to be trained in the pairwise setting could no longer be stored in memory. We solve this problem by resorting to the dual representation of the perceptron, which makes the pairwise approach feasible for problems of this size. The results on the EUR-Lex database confirm the good predictive performance of the pairwise approach and demonstrates the feasibility of this approach for large-scale tasks.",
"title": ""
},
{
"docid": "neg:1840453_8",
"text": "The psychology of conspiracy theory beliefs is not yet well understood, although research indicates that there are stable individual differences in conspiracist ideation - individuals' general tendency to engage with conspiracy theories. Researchers have created several short self-report measures of conspiracist ideation. These measures largely consist of items referring to an assortment of prominent conspiracy theories regarding specific real-world events. However, these instruments have not been psychometrically validated, and this assessment approach suffers from practical and theoretical limitations. Therefore, we present the Generic Conspiracist Beliefs (GCB) scale: a novel measure of individual differences in generic conspiracist ideation. The scale was developed and validated across four studies. In Study 1, exploratory factor analysis of a novel 75-item measure of non-event-based conspiracist beliefs identified five conspiracist facets. The 15-item GCB scale was developed to sample from each of these themes. Studies 2, 3, and 4 examined the structure and validity of the GCB, demonstrating internal reliability, content, criterion-related, convergent and discriminant validity, and good test-retest reliability. In sum, this research indicates that the GCB is a psychometrically sound and practically useful measure of conspiracist ideation, and the findings add to our theoretical understanding of conspiracist ideation as a monological belief system unpinned by a relatively small number of generic assumptions about the typicality of conspiratorial activity in the world.",
"title": ""
},
{
"docid": "neg:1840453_9",
"text": "In this paper, we develop a system for training human calligraphy skills. For such a development, the so-called dynamic font and augmented reality (AR) are employed. The dynamic font is used to generate a model character, in which the character are formed as the result of 3-dimensional motion of a virtual writing device on a virtual writing plane. Using the AR technology, we then produce a visual information consisting of not only static writing path but also dynamic writing process of model character. Such a visual information of model character is given some trainee through a head mounted display. The performance is demonstrated by some experimental studies.",
"title": ""
},
{
"docid": "neg:1840453_10",
"text": "The central nervous system (CNS) operates by a fine-tuned balance between excitatory and inhibitory signalling. In this context, the inhibitory neurotransmission may be of particular interest as it has been suggested that such neuronal pathways may constitute 'command pathways' and the principle of 'dis-inhibition' leading ultimately to excitation may play a fundamental role (Roberts, E. (1974). Adv. Neurol., 5: 127-143). The neurotransmitter responsible for this signalling is gamma-aminobutyrate (GABA) which was first discovered in the CNS as a curious amino acid (Roberts, E., Frankel, S. (1950). J. Biol. Chem., 187: 55-63) and later proposed as an inhibitory neurotransmitter (Curtis, D.R., Watkins, J.C. (1960). J. Neurochem., 6: 117-141; Krnjevic, K., Schwartz, S. (1967). Exp. Brain Res., 3: 320-336). The present review will describe aspects of GABAergic neurotransmission related to homeostatic mechanisms such as biosynthesis, metabolism, release and inactivation. Additionally, pharmacological and therapeutic aspects of this will be discussed.",
"title": ""
},
{
"docid": "neg:1840453_11",
"text": "In this paper, we study an important yet largely under-explored setting of graph embedding, i.e., embedding communities instead of each individual nodes. We find that community embedding is not only useful for community-level applications such as graph visualization, but also beneficial to both community detection and node classification. To learn such embedding, our insight hinges upon a closed loop among community embedding, community detection and node embedding. On the one hand, node embedding can help improve community detection, which outputs good communities for fitting better community embedding. On the other hand, community embedding can be used to optimize the node embedding by introducing a community-aware high-order proximity. Guided by this insight, we propose a novel community embedding framework that jointly solves the three tasks together. We evaluate such a framework on multiple real-world datasets, and show that it improves graph visualization and outperforms state-of-the-art baselines in various application tasks, e.g., community detection and node classification.",
"title": ""
},
{
"docid": "neg:1840453_12",
"text": "Medical imaging plays a central role in a vast range of healthcare practices. The usefulness of 3D visualizations has been demonstrated for many types of treatment planning. Nevertheless, full access to 3D renderings outside of the radiology department is still scarce even for many image-centric specialties. Our work stems from the hypothesis that this under-utilization is partly due to existing visualization systems not taking the prerequisites of this application domain fully into account. We have developed a medical visualization table intended to better fit the clinical reality. The overall design goals were two-fold: similarity to a real physical situation and a very low learning threshold. This paper describes the development of the visualization table with focus on key design decisions. The developed features include two novel interaction components for touch tables. A user study including five orthopedic surgeons demonstrates that the system is appropriate and useful for this application domain.",
"title": ""
},
{
"docid": "neg:1840453_13",
"text": "Agile programming involves continually evolving requirements along with a possible change in their business value and an uncertainty in their time of development. This leads to the difficulty in adapting the release plans according to the response of the environment at each iteration step. This paper shows how a machine learning approach can support the release planning process in an agile environment. The objective is to adapt the release plans according to the results of the previous iterations in the present environment . Reinforcement learning technique has been used to learn the release planning process in an environment of various constraints and multiple objectives. The technique has been applied to a case study to show the utility of the method. The simulation results show that the reinforcement technique can be easily integrated into the release planning process. The teams can learn from the previous iterations and incorporate the learning into the release plans",
"title": ""
},
{
"docid": "neg:1840453_14",
"text": "This paper outlines a new approach to the study of power, that of the sociology of translation. Starting from three principles, those of agnosticism (impartiality between actors engaged in controversy), generalised symmetry (the commitment to explain conflicting viewpoints in the same terms) and free association (the abandonment of all a priori distinctions between the natural and the social), the paper describes a scientific and economic controversy about the causes for the decline in the population of scallops in St. Brieuc Bay and the attempts by three marine biologists to develop a conservation strategy for that population. Four ‘moments’ of translation are discerned in the attempts by these researchers to impose themselves and their definition of the situation on others: (a) problematisation: the researchers sought to become indispensable to other actors in the drama by defining the nature and the problems of the latter and then suggesting that these would be resolved if the actors negotiated the ‘obligatory passage point’ of the researchers’ programme of investigation; (b) interessement: a series of processes by which the researchers sought to lock the other actors into the roles that had been proposed for them in that programme; (c) enrolment: a set of strategies in which the researchers sought to define and interrelate the various roles they had allocated to others; (d) mobilisation: a set of methods used by the researchers to ensure that supposed spokesmen for various relevant collectivities were properly able to represent those collectivities and not betrayed by the latter. In conclusion it is noted that translation is a process, never a completed accomplishment, and it may (as in the empirical case considered) fail.",
"title": ""
},
{
"docid": "neg:1840453_15",
"text": "Visual Text Analytics has been an active area of interdisciplinary research (http://textvis.lnu.se/). This interactive tutorial is designed to give attendees an introduction to the area of information visualization, with a focus on linguistic visualization. After an introduction to the basic principles of information visualization and visual analytics, this tutorial will give an overview of the broad spectrum of linguistic and text visualization techniques, as well as their application areas [3]. This will be followed by a hands-on session that will allow participants to design their own visualizations using tools (e.g., Tableau), libraries (e.g., d3.js), or applying sketching techniques [4]. Some sample datasets will be provided by the instructor. Besides general techniques, special access will be provided to use the VisArgue framework [1] for the analysis of selected datasets.",
"title": ""
},
{
"docid": "neg:1840453_16",
"text": "OBJECTIVE\nOur previous study has found that circulating microRNA (miRNA, or miR) -122, -140-3p, -720, -2861, and -3149 are significantly elevated during early stage of acute coronary syndrome (ACS). This study was conducted to determine the origin of these elevated plasma miRNAs in ACS.\n\n\nMETHODS\nqRT-PCR was performed to detect the expression profiles of these 5 miRNAs in liver, spleen, lung, kidney, brain, skeletal muscles, and heart. To determine their origins, these miRNAs were detected in myocardium of acute myocardial infarction (AMI), and as well in platelets and peripheral blood mononuclear cells (PBMCs, including monocytes, circulating endothelial cells (CECs) and lymphocytes) of the AMI pigs and ACS patients.\n\n\nRESULTS\nMiR-122 was specifically expressed in liver, and miR-140-3p, -720, -2861, and -3149 were highly expressed in heart. Compared with the sham pigs, miR-122 was highly expressed in the border zone of the ischemic myocardium in the AMI pigs without ventricular fibrillation (P < 0.01), miR-122 and -720 were decreased in platelets of the AMI pigs, and miR-122, -140-3p, -720, -2861, and -3149 were increased in PBMCs of the AMI pigs (all P < 0.05). Compared with the non-ACS patients, platelets miR-720 was decreased and PBMCs miR-122, -140-3p, -720, -2861, and -3149 were increased in the ACS patients (all P < 0.01). Furthermore, PBMCs miR-122, -720, and -3149 were increased in the AMI patients compared with the unstable angina (UA) patients (all P < 0.05). Further origin identification revealed that the expression levels of miR-122 in CECs and lymphocytes, miR-140-3p and -2861 in monocytes and CECs, miR-720 in monocytes, and miR-3149 in CECs were greatly up-regulated in the ACS patients compared with the non-ACS patients, and were higher as well in the AMI patients than that in the UA patients except for the miR-122 in CECs (all P < 0.05).\n\n\nCONCLUSION\nThe elevated plasma miR-122, -140-3p, -720, -2861, and -3149 in the ACS patients were mainly originated from CECs and monocytes.",
"title": ""
},
{
"docid": "neg:1840453_17",
"text": "The timeliness and synchronization requirements of multimedia data demand e&ient buffer management and disk access schemes for multimedia database systems. The data rates involved are very high and despite the developmenl of eficient storage and retrieval strategies, disk I/O is a potential bottleneck, which limits the number of concurrent sessions supported by a system. This calls for more eficient use of data that has already been brought into the buffer. We introduce the notion of continuous media caching, which is a simple and novel technique where data that have been played back by a user are preserved in a controlled fashion for use by subsequent users requesting the same data. We present heuristics to determine when continuous media sharing is beneficial and describe the bufler management algorithms. Simulation studies indicate that our technique substantially improves the performance of multimedia database applications where data sharing is possible.",
"title": ""
},
{
"docid": "neg:1840453_18",
"text": "Interactive Narrative is an approach to interactive entertainment that enables the player to make decisions that directly affect the direction and/or outcome of the narrative experience being delivered by the computer system. Interactive narrative requires two seemingly conflicting requirements: coherent narrative and user agency. We present an interactive narrative system that uses a combination of narrative control and autonomous believable character agents to augment a story world simulation in which the user has a high degree of agency with narrative plot control. A drama manager called the Automated Story Director gives plot-based guidance to believable agents. The believable agents are endowed with the autonomy necessary to carry out directives in the most believable fashion possible. Agents also handle interaction with the user. When the user performs actions that change the world in such a way that the Automated Story Director can no longer drive the intended narrative forward, it is able to adapt the plot to incorporate the user’s changes and still achieve",
"title": ""
},
{
"docid": "neg:1840453_19",
"text": "© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creat iveco mmons .org/licen ses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creat iveco mmons .org/ publi cdoma in/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Oral presentations",
"title": ""
}
] |
1840454 | Monitoring body positions and movements during sleep using WISPs | [
{
"docid": "pos:1840454_0",
"text": "Improving the quality of healthcare and the prospects of \"aging in place\" using wireless sensor technology requires solving difficult problems in scale, energy management, data access, security, and privacy. We present AlarmNet, a novel system for assisted living and residential monitoring that uses a two-way flow of data and analysis between the front- and back-ends to enable context-aware protocols that are tailored to residents' individual patterns of living. AlarmNet integrates environmental, physiological, and activity sensors in a scalable heterogeneous architecture. The SenQ query protocol provides real-time access to data and lightweight in-network processing. Circadian activity rhythm analysis learns resident activity patterns and feeds them back into the network to aid context-aware power management and dynamic privacy policies.",
"title": ""
}
] | [
{
"docid": "neg:1840454_0",
"text": "We consider the problem of planning smooth paths for a vehicle in a region bounded by polygonal chains. The paths are represented as B-spline functions. A path is found by solving an optimization problem using a cost function designed to care for both the smoothness of the path and the safety of the vehicle. Smoothness is defined as small magnitude of the derivative of curvature and safety is defined as the degree of centering of the path between the polygonal chains. The polygonal chains are preprocessed in order to remove excess parts and introduce safety margins for the vehicle. The method has been implemented for use with a standard solver and tests have been made on application data provided by the Swedish mining company LKAB.",
"title": ""
},
{
"docid": "neg:1840454_1",
"text": "Traffic is the chief puzzle problem which every country faces because of the enhancement in number of vehicles throughout the world, especially in large urban towns. Hence the need arises for simulating and optimizing traffic control algorithms to better accommodate this increasing demand. Fuzzy optimization deals with finding the values of input parameters of a complex simulated system which result in desired output. This paper presents a MATLAB simulation of fuzzy logic traffic controller for controlling flow of traffic in isolated intersections. This controller is based on the waiting time and queue length of vehicles at present green phase and vehicles queue lengths at the other phases. The controller controls the traffic light timings and phase difference to ascertain sebaceous flow of traffic with least waiting time and queue length. In this paper, the isolated intersection model used consists of two alleyways in each approach. Every outlook has different value of queue length and waiting time, systematically, at the intersection. The maximum value of waiting time and vehicle queue length has to be selected by using proximity sensors as inputs to controller for the ameliorate control traffic flow at the intersection. An intelligent traffic model and fuzzy logic traffic controller are developed to evaluate the performance of traffic controller under different pre-defined conditions for oleaginous flow of traffic. Additionally, this fuzzy logic traffic controller has emergency vehicle siren sensors which detect emergency vehicle movement like ambulance, fire brigade, Police Van etc. and gives maximum priority to him and pass preferred signal to it. Keywords-Fuzzy Traffic Controller; Isolated Intersection; Vehicle Actuated Controller; Emergency Vehicle Selector.",
"title": ""
},
{
"docid": "neg:1840454_2",
"text": "The field of Big Data and related technologies is rapidly evolving. Consequently, many benchmarks are emerging, driven by academia and industry alike. As these benchmarks are emphasizing different aspects of Big Data and, in many cases, covering different technical platforms and uses cases, it is extremely difficult to keep up with the pace of benchmark creation. Also with the combinations of large volumes of data, heterogeneous data formats and the changing processing velocity, it becomes complex to specify an architecture which best suits all application requirements. This makes the investigation and standardization of such systems very difficult. Therefore, the traditional way of specifying a standardized benchmark with pre-defined workloads, which have been in use for years in the transaction and analytical processing systems, is not trivial to employ for Big Data systems. This document provides a summary of existing benchmarks and those that are in development, gives a side-by-side comparison of their characteristics and discusses their pros and cons. The goal is to understand the current state in Big Data benchmarking and guide practitioners in their approaches and use cases.",
"title": ""
},
{
"docid": "neg:1840454_3",
"text": "Recent emergence of low-cost and easy-operating depth cameras has reinvigorated the research in skeleton-based human action recognition. However, most existing approaches overlook the intrinsic interdependencies between skeleton joints and action classes, thus suffering from unsatisfactory recognition performance. In this paper, a novel latent max-margin multitask learning model is proposed for 3-D action recognition. Specifically, we exploit skelets as the mid-level granularity of joints to describe actions. We then apply the learning model to capture the correlations between the latent skelets and action classes each of which accounts for a task. By leveraging structured sparsity inducing regularization, the common information belonging to the same class can be discovered from the latent skelets, while the private information across different classes can also be preserved. The proposed model is evaluated on three challenging action data sets captured by depth cameras. Experimental results show that our model consistently achieves superior performance over recent state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840454_4",
"text": "In the perspective of a sustainable urban planning, it is necessary to investigate cities in a holistic way and to accept surprises in the response of urban environments to a particular set of strategies. For example, the process of inner-city densification may limit air pollution, carbon emissions, and energy use through reduced transportation; on the other hand, the resulting street canyons could lead to local levels of pollution that could be higher than in a low-density urban setting. The holistic approach to sustainable urban planning implies using different models in an integrated way that is capable of simulating the urban system. As the interconnection of such models is not a trivial task, one of the key elements that may be applied is the description of the urban geometric properties in an “interoperable” way. Focusing on air quality as one of the most pronounced urban problems, the geometric aspects of a city may be described by objects such as those defined in CityGML, so that an appropriate air quality model can be applied for estimating the quality of the urban air on the basis of atmospheric flow and chemistry equations. It is generally admitted that an ontology-based approach can provide a generic and robust way to interconnect different models. However, a direct approach, that consists in establishing correspondences between concepts, is not sufficient in the present situation. One has to take into account, among other things, the computations involved in the correspondences between concepts. In this paper we first present theoretical background and motivations for the interconnection of 3D city models and other models related to sustainable development and urban planning. Then we present a practical experiment based on the interconnection of CityGML with an air quality model. Our approach is based on the creation of an ontology of air quality models and on the extension of an ontology of urban planning process (OUPP) that acts as an ontology mediator.",
"title": ""
},
{
"docid": "neg:1840454_5",
"text": "The idea that general intelligence may be more variable in males than in females has a long history. In recent years it has been presented as a reason that there is little, if any, mean sex difference in general intelligence, yet males tend to be overrepresented at both the top and bottom ends of its overall, presumably normal, distribution. Clear analysis of the actual distribution of general intelligence based on large and appropriately population-representative samples is rare, however. Using two population-wide surveys of general intelligence in 11-year-olds in Scotland, we showed that there were substantial departures from normality in the distribution, with less variability in the higher range than in the lower. Despite mean IQ-scale scores of 100, modal scores were about 105. Even above modal level, males showed more variability than females. This is consistent with a model of the population distribution of general intelligence as a mixture of two essentially normal distributions, one reflecting normal variation in general intelligence and one refecting normal variation in effects of genetic and environmental conditions involving mental retardation. Though present at the high end of the distribution, sex differences in variability did not appear to account for sex differences in high-level achievement.",
"title": ""
},
{
"docid": "neg:1840454_6",
"text": "In this paper, solutions for developing low cost electronics for antenna transceivers that take advantage of the stable electrical properties of the organic substrate liquid crystal polymer (LCP) has been presented. Three important ingredients in RF wireless transceivers namely embedded passives, a dual band filter and a RFid antenna have been designed and fabricated on LCP. Test results of all 3 of the structures show good agreement between the simulated and measured results over their respective bandwidths, demonstrating stable performance of the LCP substrate.",
"title": ""
},
{
"docid": "neg:1840454_7",
"text": "This paper describes a version of the auditory image model (AIM) [1] implemented in MATLAB. It is referred to as “aim-mat” and it includes the basic modules that enable AIM to simulate the spectral analysis, neural encoding and temporal integration performed by the auditory system. The dynamic representations produced by non-static sounds can be viewed on a frame-by-frame basis or in movies with synchronized sound. The software has a sophisticated graphical user interface designed to facilitate the auditory modelling. It is also possible to add MATLAB code and complete modules to aim-mat. The software can be downloaded from http://www.mrccbu.cam.ac.uk/cnbh/aimmanual",
"title": ""
},
{
"docid": "neg:1840454_8",
"text": "In exploring the question of whether a computer program is behaving creatively, it is important to be explicit, and if possible formal, about the criteria that are being applied in making judgements of creativity. We propose a formal (and rather simplified) outline of the relevant attributes of a potentially creative program. Based on this, we posit a number of formal criteria that could be applied to rate the extent to which the program has behaved creatively. A guiding principle is that the question of what computational mechanisms might lead to creative behaviour is open and empirical, and hence we should clearly distinguish between judgements about creative achievements and theoretical proposals about potentially creative mechanisms. The intention is to focus, clarify and make more concrete the debate about creative",
"title": ""
},
{
"docid": "neg:1840454_9",
"text": "Meyer has recently introduced an image decomposition model to split an image into two components: a geometrical component and a texture (oscillatory) component. Inspired by his work, numerical models have been developed to carry out the decomposition of gray scale images. In this paper, we propose a decomposition algorithm for color images. We introduce a generalization of Meyer s G norm to RGB vectorial color images, and use Chromaticity and Brightness color model with total variation minimization. We illustrate our approach with numerical examples. 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840454_10",
"text": "The purpose of this study is to analyze the relationship among Person Organization Fit (POF), Organizational Commitment (OC) and Knowledge Sharing Attitude (KSA). The paper develops a conceptual frame based on a theory and literature review. A quantitative approach has been used to measure the level of POF and OC as well as to explore the relationship of these variables with KSA & with each other by using a sample of 315 academic managers of public sector institutions of higher education. POF has a positive relationship with OC and KSA. A positive relationship also exists between OC and KSA. It would be an effective contribution in the existing body of knowledge. Managers and other stakeholders may be helped to recognize the significance of POF, OC and KSA as well as their relationship with each other for ensuring selection of employee’s best fitted in the organization and for creating and maintaining a conducive environment for improving organizational commitment and knowledge sharing of the employees which will ultimately result in enhanced efficacy and effectiveness of the organization.",
"title": ""
},
{
"docid": "neg:1840454_11",
"text": "This study examined the impact of mobile communications on interpersonal relationships in daily life. Based on a nationwide survey in Japan, landline phone, mobile voice phone, mobile mail (text messaging), and PC e-mail were compared to assess their usage in terms of social network and psychological factors. The results indicated that young, nonfamily-related pairs of friends, living close to each other with frequent faceto-face contact were more likely to use mobile media. Social skill levels are negatively correlated with relative preference for mobile mail in comparison with mobile voice phone. These findings suggest that mobile mail is preferable for Japanese young people who tend to avoid direct communication and that its use maintains existing bonds rather than create new ones.",
"title": ""
},
{
"docid": "neg:1840454_12",
"text": "This paper examines the role of IT in developing collaborative consumption. We present a study of the multi-sided platform goCatch, which is widely recognized as a mobile application and digital disruptor in the Australian transport industry. From our investigation, we find that goCatch uses IT to create situational-based and object-based opportunities to enable collaborative consumption and in turn digital disruption to the incumbent industry. We also highlight the factors to consider in developing a mobile application to connect with customers, and serve as a viable competitive option for responding to competition. Such research is necessary in order to better understand how service providers extract business value from digital technologies to formulate new breakthrough strategies, design compelling new products and services, and transform management processes. Ongoing work will reveal how m-commerce service providers can extract business value from a collaborative consumption model.",
"title": ""
},
{
"docid": "neg:1840454_13",
"text": "The first pathologic alterations of the retina are seen in the vessel network. These modifications affect very differently arteries and veins, and the appearance and entity of the modification differ as the retinopathy becomes milder or more severe. In order to develop an automatic procedure for the diagnosis and grading of retinopathy, it is necessary to be able to discriminate arteries from veins. The problem is complicated by the similarity in the descriptive features of these two structures and by the contrast and luminosity variability of the retina. We developed a new algorithm for classifying the vessels, which exploits the peculiarities of retinal images. By applying a divide et imperaapproach that partitioned a concentric zone around the optic disc into quadrants, we were able to perform a more robust local classification analysis. The results obtained by the proposed technique were compared with those provided by a manual classification on a validation set of 443 vessels and reached an overall classification error of 12 %, which reduces to 7 % if only the diagnostically important retinal vessels are considered.",
"title": ""
},
{
"docid": "neg:1840454_14",
"text": "BACKGROUND\nHuman leech infestation is a disease of the poor who live in rural areas and use water contaminated with leeches. Like any other body orifices, vagina can also be infested by leech when females use contaminated water for bathing and/or douching. Although this condition is very rare in postmenopausal women, it causes morbidities and mortalities.\n\n\nCASE DETAILS\nA 70 year old Para X (all alive) abortion I mother, postmenopausal for the last 20 years, presented with vaginal bleeding of 3 weeks duration to Gimbie Adventist Hospital, Western Ethiopia. On examination, she had deranged vital signs and there was a dark moving worm attached to the cervical os. She was admitted with the diagnosis of hypovolumic shock and severe anemia secondary to postmenopausal vaginal bleeding. After the patient was stabilized with intravenous crystalloids, the leech was removed from the vagina. She was then transfused with two units of whole blood and discharged with good condition on the 3(rd) post procedure day with ferrous sulphate.\n\n\nCONCLUSION\nVaginal leech infestation in postmenopausal woman can cause hypovolumic shock and severe anemia. Therefore, in order to decrease morbidities from failure or delay in making the diagnosis, health care providers should consider the possibility of vaginal leech infestation in postmenopausal woman from rural areas and those who use river water for drinking, bathing and/or douching and presented with vaginal bleeding. In addition, the importance of using clean water and improving access to safe water should be emphasized.",
"title": ""
},
{
"docid": "neg:1840454_15",
"text": "Line labelling has been used to determine whether a two-dimensional (2D) line drawing object is a possible or impossible representation of a three-dimensional (3D) solid object. However, the results are not sufficiently robust because the existing line labelling methods do not have any validation method to verify their own result. In this research paper, the concept of graph colouring is applied to a validation technique for a labelled 2D line drawing. As a result, a graph colouring algorithm for validating labelled 2D line drawings is presented. A high-level programming language, MATLAB R2009a, and two primitive 2D line drawing classes, prism and pyramid are used to show how the algorithms can be implemented. The proposed algorithm also shows that the minimum number of colours needed to colour the labelled 2D line drawing object is equal to 3 for prisms and 1 n − for pyramids, where n is the number of vertices (junctions) in the pyramid objects.",
"title": ""
},
{
"docid": "neg:1840454_16",
"text": "Multi-word expressions constitute a significant portion of the lexicon of every natural language, and handling them correctly is mandatory for various NLP applications. Yet such entities are notoriously hard to define, and are consequently missing from standard lexicons and dictionaries. Multi-word expressions exhibit idiosyncratic behavior on various levels: orthographic, morphological, syntactic and semantic. In this work we take advantage of the morphological and syntactic idiosyncrasy of Hebrew noun compounds and employ it to extract such expressions from text corpora. We show that relying on linguistic information dramatically improves the accuracy of compound extraction, reducing over one third of the errors compared with the best baseline.",
"title": ""
},
{
"docid": "neg:1840454_17",
"text": "This review captures the synthesis, assembly, properties, and applications of copper chalcogenide NCs, which have achieved significant research interest in the last decade due to their compositional and structural versatility. The outstanding functional properties of these materials stems from the relationship between their band structure and defect concentration, including charge carrier concentration and electronic conductivity character, which consequently affects their optoelectronic, optical, and plasmonic properties. This, combined with several metastable crystal phases and stoichiometries and the low energy of formation of defects, makes the reproducible synthesis of these materials, with tunable parameters, remarkable. Further to this, the review captures the progress of the hierarchical assembly of these NCs, which bridges the link between their discrete and collective properties. Their ubiquitous application set has cross-cut energy conversion (photovoltaics, photocatalysis, thermoelectrics), energy storage (lithium-ion batteries, hydrogen generation), emissive materials (plasmonics, LEDs, biolabelling), sensors (electrochemical, biochemical), biomedical devices (magnetic resonance imaging, X-ray computer tomography), and medical therapies (photochemothermal therapies, immunotherapy, radiotherapy, and drug delivery). The confluence of advances in the synthesis, assembly, and application of these NCs in the past decade has the potential to significantly impact society, both economically and environmentally.",
"title": ""
},
{
"docid": "neg:1840454_18",
"text": "Few concepts embody the goals of artificial intelligence as well as fully autonomous robots. Countless films and stories have been made that focus on a future filled with autonomous agents that complete menial tasks or run errands that humans do not want or are too busy to carry out. One such task is driving automobiles. In this paper, we summarize the work we have done towards a future of fully-autonomous vehicles, specifically coordinating such vehicles safely and efficiently at intersections. We then discuss the implications this work has for other areas of AI, including planning, multiagent learning, and computer vision.",
"title": ""
},
{
"docid": "neg:1840454_19",
"text": "The visual appearance of rain is highly complex. Unlike the particles that cause other weather conditions such as haze and fog, rain drops are large and visible to the naked eye. Each drop refracts and reflects both scene radiance and environmental illumination towards an observer. As a result, a spatially distributed ensemble of drops moving at high velocities (rain) produces complex spatial and temporal intensity fluctuations in images and videos. To analyze the effects of rain, it is essential to understand the visual appearance of a single rain drop. In this paper, we develop geometric and photometric models for the refraction through, and reflection (both specular and internal) from, a rain drop. Our geometric and photometric models show that each rain drop behaves like a wide-angle lens that redirects light from a large field of view towards the observer. From this, we observe that in spite of being a transparent object, the brightness of the drop does not depend strongly on the brightness of the background. Our models provide the fundamental tools to analyze the complex effects of rain. Thus, we believe our work has implications for vision in bad weather as well as for efficient rendering of rain in computer graphics.",
"title": ""
}
] |
1840455 | A Novel Variable Reluctance Resolver with Nonoverlapping Tooth–Coil Windings | [
{
"docid": "pos:1840455_0",
"text": "A resolver generates a pair of signals proportional to the sine and cosine of the angular position of its shaft. A new low-cost method for converting the amplitudes of these sine/cosine transducer signals into a measure of the input angle without using lookup tables is proposed. The new method takes advantage of the components used to operate the resolver, the excitation (carrier) signal in particular. This is a feedforward method based on comparing the amplitudes of the resolver signals to those of the excitation signal together with another shifted by pi/2. A simple method is then used to estimate the shaft angle through this comparison technique. The poor precision of comparison of the signals around their highly nonlinear peak regions is avoided by using a simple technique that relies only on the alternating pseudolinear segments of the signals. This results in a better overall accuracy of the converter. Beside simplicity of implementation, the proposed scheme offers the advantage of robustness to amplitude fluctuation of the transducer excitation signal.",
"title": ""
},
{
"docid": "pos:1840455_1",
"text": "Variable reluctance (VR) resolver is widely used in traction motor for battery electric vehicle as well as hybrid electric vehicle as a rotor position sensor. VR resolver generates absolute position signal by using resolver-to-digital converter (RDC) in order to deliver exact position of permanent magnets in a rotor of traction motor to motor controller. This paper deals with fault diagnosis of VR resolver by using co-simulation analysis with RDC for position angle detection. As fault conditions, eccentricity of VR resolver, short circuit condition of excitation coil and output signal coils, and material problem of silicon steel in a view point of permeability are considered. 2D FEM is used for the output signal waveforms of SIN, COS and these waveforms are converted into absolute position angle by using the algorithm of RDC. For the verification of proposed analysis results, experiment on fault conditions was conducted and compared with simulation ones.",
"title": ""
}
] | [
{
"docid": "neg:1840455_0",
"text": "Environmental isolates belonging to the genus Acidovorax play a crucial role in degrading a wide range of pollutants. Studies on Acidovorax are currently limited for many species due to the lack of genetic tools. Here, we described the use of the replicon from a small, cryptic plasmid indigenous to Acidovorx temperans strain CB2, to generate stably maintained shuttle vectors. In addition, we have developed a scarless gene knockout technique, as well as establishing green fluorescent protein (GFP) reporter and complementation systems. Taken collectively, these tools will improve genetic manipulations in the genus Acidovorax.",
"title": ""
},
{
"docid": "neg:1840455_1",
"text": "A low-noise front-end and its controller are proposed for capacitive touch screen panels. The proposed front-end circuit based on a ΔΣ ADC uses differential sensing and integration scheme to maximize the input dynamic range. In addition, supply and internal reference voltage noise are effectively removed in the sensed touch signal. Furthermore, the demodulation process in front of the ΔΣ ADC provides the maximized oversampling ratio (OSR) so that the scan rate can be increased at the targeted resolution. The proposed IC is implemented in a mixed-mode 0.18-μm CMOS process. The measurement is performed on a bar-patterned 4.3-inch touch screen panel with 12 driving lines and 8 sensing channels. The report rate is 100 Hz, and SNR and spatial jitter are 54 dB and 0.11 mm, respectively. The chip area is 3 × 3 mm2 and total power consumption is 2.9 mW with 1.8-V and 3.3-V supply.",
"title": ""
},
{
"docid": "neg:1840455_2",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital",
"title": ""
},
{
"docid": "neg:1840455_3",
"text": "INTRODUCTION\nThe increasing use of cone-beam computed tomography in orthodontics has been coupled with heightened concern about the long-term risks of x-ray exposure in orthodontic populations. An industry response to this has been to offer low-exposure alternative scanning options in newer cone-beam computed tomography models.\n\n\nMETHODS\nEffective doses resulting from various combinations of field of view size and field location comparing child and adult anthropomorphic phantoms with the recently introduced i-CAT FLX cone-beam computed tomography unit (Imaging Sciences, Hatfield, Pa) were measured with optical stimulated dosimetry using previously validated protocols. Scan protocols included high resolution (360° rotation, 600 image frames, 120 kV[p], 5 mA, 7.4 seconds), standard (360°, 300 frames, 120 kV[p], 5 mA, 3.7 seconds), QuickScan (180°, 160 frames, 120 kV[p], 5 mA, 2 seconds), and QuickScan+ (180°, 160 frames, 90 kV[p], 3 mA, 2 seconds). Contrast-to-noise ratio was calculated as a quantitative measure of image quality for the various exposure options using the QUART DVT phantom.\n\n\nRESULTS\nChild phantom doses were on average 36% greater than adult phantom doses. QuickScan+ protocols resulted in significantly lower doses than standard protocols for the child (P = 0.0167) and adult (P = 0.0055) phantoms. The 13 × 16-cm cephalometric fields of view ranged from 11 to 85 μSv in the adult phantom and 18 to 120 μSv in the child phantom for the QuickScan+ and standard protocols, respectively. The contrast-to-noise ratio was reduced by approximately two thirds when comparing QuickScan+ with standard exposure parameters.\n\n\nCONCLUSIONS\nQuickScan+ effective doses are comparable with conventional panoramic examinations. Significant dose reductions are accompanied by significant reductions in image quality. However, this trade-off might be acceptable for certain diagnostic tasks such as interim assessment of treatment results.",
"title": ""
},
{
"docid": "neg:1840455_4",
"text": "This paper explores how the remaining useful life (RUL) can be assessed for complex systems whose internal state variables are either inaccessible to sensors or hard to measure under operational conditions. Consequently, inference and estimation techniques need to be applied on indirect measurements, anticipated operational conditions, and historical data for which a Bayesian statistical approach is suitable. Models of electrochemical processes in the form of equivalent electric circuit parameters were combined with statistical models of state transitions, aging processes, and measurement fidelity in a formal framework. Relevance vector machines (RVMs) and several different particle filters (PFs) are examined for remaining life prediction and for providing uncertainty bounds. Results are shown on battery data.",
"title": ""
},
{
"docid": "neg:1840455_5",
"text": "White matter hyperintensities (WMHs) in the brain are the consequence of cerebral small vessel disease, and can easily be detected on MRI. Over the past three decades, research has shown that the presence and extent of white matter hyperintense signals on MRI are important for clinical outcome, in terms of cognitive and functional impairment. Large, longitudinal population-based and hospital-based studies have confirmed a dose-dependent relationship between WMHs and clinical outcome, and have demonstrated a causal link between large confluent WMHs and dementia and disability. Adequate differential diagnostic assessment and management is of the utmost importance in any patient, but most notably those with incipient cognitive impairment. Novel imaging techniques such as diffusion tensor imaging might reveal subtle damage before it is visible on standard MRI. Even in Alzheimer disease, which is thought to be primarily caused by amyloid, vascular pathology, such as small vessel disease, may be of greater importance than amyloid itself in terms of influencing the disease course, especially in older individuals. Modification of risk factors for small vessel disease could be an important therapeutic goal, although evidence for effective interventions is still lacking. Here, we provide a timely Review on WMHs, including their relationship with cognitive decline and dementia.",
"title": ""
},
{
"docid": "neg:1840455_6",
"text": "Trust models have been recently suggested as an effective security mechanism for Wireless Sensor Networks (WSNs). Considerable research has been done on modeling trust. However, most current research work only takes communication behavior into account to calculate sensor nodes' trust value, which is not enough for trust evaluation due to the widespread malicious attacks. In this paper, we propose an Efficient Distributed Trust Model (EDTM) for WSNs. First, according to the number of packets received by sensor nodes, direct trust and recommendation trust are selectively calculated. Then, communication trust, energy trust and data trust are considered during the calculation of direct trust. Furthermore, trust reliability and familiarity are defined to improve the accuracy of recommendation trust. The proposed EDTM can evaluate trustworthiness of sensor nodes more precisely and prevent the security breaches more effectively. Simulation results show that EDTM outperforms other similar models, e.g., NBBTE trust model.",
"title": ""
},
{
"docid": "neg:1840455_7",
"text": "A key challenge in fine-grained recognition is how to find and represent discriminative local regions. Recent attention models are capable of learning discriminative region localizers only from category labels with reinforcement learning. However, not utilizing any explicit part information, they are not able to accurately find multiple distinctive regions. In this work, we introduce an attribute-guided attention localization scheme where the local region localizers are learned under the guidance of part attribute descriptions. By designing a novel reward strategy, we are able to learn to locate regions that are spatially and semantically distinctive with reinforcement learning algorithm. The attribute labeling requirement of the scheme is more amenable than the accurate part location annotation required by traditional part-based fine-grained recognition methods. Experimental results on the CUB-200-2011 dataset [1] demonstrate the superiority of the proposed scheme on both fine-grained recognition and attribute recognition.",
"title": ""
},
{
"docid": "neg:1840455_8",
"text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.",
"title": ""
},
{
"docid": "neg:1840455_9",
"text": "For robot tutors, autonomy and personalizations are important factors in order to engage users as well as to personalize the content and interaction according to the needs of individuals. is paper presents the Programming Cognitive Robot (ProCRob) soware architecture to target personalized social robotics in two complementary ways. ProCRob supports the development and personalization of social robot applications by teachers and therapists without computer programming background. It also supports the development of autonomous robots which can adapt according to the human-robot interaction context. ProCRob is based on our previous research on autonomous robotics and has been developed since 2015 by a multi-disciplinary team of researchers from the elds of AI, Robotics and Psychology as well as artists and designers at the University of Luxembourg. ProCRob is currently being used and further developed for therapy of children with autism, and for encouraging rehabilitation activities in patients with post-stroke. is paper presents a summary of ProCRob and its application in autism.",
"title": ""
},
{
"docid": "neg:1840455_10",
"text": "We present an overview of Candide, a system for automatic translat ion of French text to English text. Candide uses methods of information theory and statistics to develop a probabili ty model of the translation process. This model, which is made to accord as closely as possible with a large body of French and English sentence pairs, is then used to generate English translations of previously unseen French sentences. This paper provides a tutorial in these methods, discussions of the training and operation of the system, and a summary of test results. 1. I n t r o d u c t i o n Candide is an experimental computer program, now in its fifth year of development at IBM, for translation of French text to Enghsh text. Our goal is to perform fuRy-automatic, high-quality text totext translation. However, because we are still far from achieving this goal, the program can be used in both fully-automatic and translator 's-assistant modes. Our approach is founded upon the statistical analysis of language. Our chief tools axe the source-channel model of communication, parametric probabili ty models of language and translation, and an assortment of numerical algorithms for training such models from examples. This paper presents elementary expositions of each of these ideas, and explains how they have been assembled to produce Caadide. In Section 2 we introduce the necessary ideas from information theory and statistics. The reader is assumed to know elementary probabili ty theory at the level of [1]. In Sections 3 and 4 we discuss our language and translation models. In Section 5 we describe the operation of Candide as it translates a French document. In Section 6 we present results of our internal evaluations and the AB.PA Machine Translation Project evaluations. Section 7 is a summary and conclusion. 2 . Stat is t ical Trans la t ion Consider the problem of translating French text to English text. Given a French sentence f , we imagine that it was originally rendered as an equivalent Enghsh sentence e. To obtain the French, the Enghsh was t ransmit ted over a noisy communication channel, which has the curious property that English sentences sent into it emerge as their French translations. The central assumption of Candide's design is that the characteristics of this channel can be determined experimentally, and expressed mathematically. *Current address: Renaissance Technologies, Stony Brook, NY ~ English-to-French I f e Channel \" _[ French-to-English -] Decoder 6 Figure 1: The Source-Channel Formalism of Translation. Here f is the French text to be translated, e is the putat ive original English rendering, and 6 is the English translation. This formalism can be exploited to yield French-to-English translations as follows. Let us write P r (e I f ) for the probability that e was the original English rendering of the French f. Given a French sentence f, the problem of automatic translation reduces to finding the English sentence tha t maximizes P.r(e I f) . That is, we seek 6 = argmsx e Pr (e I f) . By virtue of Bayes' Theorem, we have = argmax Pr(e If ) = argmax Pr(f I e)Pr(e) (1) e e The term P r ( f l e ) models the probabili ty that f emerges from the channel when e is its input. We call this function the translation model; its domain is all pairs (f, e) of French and English word-strings. The term Pr (e ) models the a priori probability that e was supp led as the channel input. We call this function the language model. 
Each of these fac tors the translation model and the language model independent ly produces a score for a candidate English translat ion e. The translation model ensures that the words of e express the ideas of f, and the language model ensures that e is a grammatical sentence. Candide sehcts as its translat ion the e that maximizes their product. This discussion begs two impor tant questions. First , where do the models P r ( f [ e) and Pr (e ) come from? Second, even if we can get our hands on them, how can we search the set of all English strings to find 6? These questions are addressed in the next two sections. 2.1. P robab i l i ty Models We begin with a brief detour into probabili ty theory. A probability model is a mathematical formula that purports to express the chance of some observation. A parametric model is a probability model with adjustable parameters, which can be changed to make the model bet ter match some body of data. Let us write c for a body of da ta to be modeled, and 0 for a vector of parameters. The quanti ty Prs (c ) , computed according to some formula involving c and 0, is called the hkelihood 157 [Human Language Technology, Plainsboro, 1994]",
"title": ""
},
{
"docid": "neg:1840455_11",
"text": "Atrophoderma vermiculata is a rare genodermatosis with usual onset in childhood, characterized by a \"honey-combed\" reticular atrophy of the cheeks. The course is generally slow, with progressive worsening. We report successful treatment of 2 patients by means of the carbon dioxide and 585 nm pulsed dye lasers.",
"title": ""
},
{
"docid": "neg:1840455_12",
"text": "The type III secretion (T3S) pathway allows bacteria to inject effector proteins into the cytosol of target animal or plant cells. T3S systems evolved into seven families that were distributed among Gram-negative bacteria by horizontal gene transfer. There are probably a few hundred effectors interfering with control and signaling in eukaryotic cells and offering a wealth of new tools to cell biologists.",
"title": ""
},
{
"docid": "neg:1840455_13",
"text": "We present an field programmable gate arrays (FPGA) based implementation of the popular Viola-Jones face detection algorithm, which is an essential building block in many applications such as video surveillance and tracking. Our implementation is a complete system level hardware design described in a hardware description language and validated on the affordable DE2-115 evaluation board. Our primary objective is to study the achievable performance with a low-end FPGA chip based implementation. In addition, we release to the public domain the entire project. We hope that this will enable other researchers to easily replicate and compare their results to ours and that it will encourage and facilitate further research and educational ideas in the areas of image processing, computer vision, and advanced digital design and FPGA prototyping. 2017 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "neg:1840455_14",
"text": "Information systems and intelligent knowledge processing are playing an increasing role in business, science and technology. Recently, advanced information systems have evolved to facilitate the co-evolution of human and information networks within communities. These advanced information systems use various paradigms including artificial intelligence, knowledge management, and neural science as well as conventional information processing paradigms.",
"title": ""
},
{
"docid": "neg:1840455_15",
"text": "In this paper we address the problem of offline Arabic handwriting word recognition. Offline recognition of handwritten words is a difficult task due to the high variability and uncertainty of human writing. The majority of the recent systems are constrained by the size of the lexicon to deal with and the number of writers. In this paper, we propose an approach for multi-writers Arabic handwritten words recognition using multiple Bayesian networks. First, we cut the image in several blocks. For each block, we compute a vector of descriptors. Then, we use K-means to cluster the low-level features including Zernik and Hu moments. Finally, we apply four variants of Bayesian networks classifiers (Naïve Bayes, Tree Augmented Naïve Bayes (TAN), Forest Augmented Naïve Bayes (FAN) and DBN (dynamic bayesian network) to classify the whole image of tunisian city name. The results demonstrate FAN and DBN outperform good recognition rates.",
"title": ""
},
{
"docid": "neg:1840455_16",
"text": "This paper proposes an asymmetrical pulse width modulation (APWM) with frequency tracking control of full bridge series resonant inverter for induction heating application. In this method, APWM is used as power regulation, and phased locked loop (PLL) is used to attain zero-voltage-switching (ZVS) over a wide load range. The complete closed loop control model is obtained using small signal analysis. The validity of the proposed control is verified by simulation results.",
"title": ""
},
{
"docid": "neg:1840455_17",
"text": "The present study made an attempt to analyze the existing buying behaviour of Instant Food Products by individual households and to predict the demand for Instant Food Products of Hyderabad city in Andra Padesh .All the respondents were aware of pickles and Sambar masala but only 56.67 per cent of respondents were aware of Dosa/Idli mix. About 96.11 per cent consumers of Dosa/Idli mix and more than half of consumers of pickles and Sambar masala prepared their own. Low cost of home preparation and differences in tastes were the major reasons for non consumption, whereas ready availability and save time of preparation were the reasons for consuming Instant Food Products. Retail shops are the major source of information and source of purchase of Instant Food Products. The average monthly expenditure on Instant Food Products was found to be highest in higher income groups. The average per capita purchase and per capita expenditure on Instant food Products had a positive relationship with income of households.High price and poor taste were the reasons for not purchasing particular brand whereas best quality, retailers influence and ready availability were considered for preferring particular brand of products by the consumers.",
"title": ""
},
{
"docid": "neg:1840455_18",
"text": "Internet of Things (IoT) is a fast-growing innovation that will greatly change the way humans live. It can be thought of as the next big step in Internet technology. What really enable IoT to be a possibility are the various technologies that build it up. The IoT architecture mainly requires two types of technologies: data acquisition technologies and networking technologies. Many technologies are currently present that aim to serve as components to the IoT paradigm. This paper aims to categorize the various technologies present that are commonly used by Internet of Things.",
"title": ""
},
{
"docid": "neg:1840455_19",
"text": "We develop a flexible Conditional Random Field framework for supervised preference aggregation, which combines preferences from multiple experts over items to form a distribution over rankings. The distribution is based on an energy comprised of unary and pairwise potentials allowing us to effectively capture correlations between both items and experts. We describe procedures for learning in this modelnand demonstrate that inference can be done much more efficiently thannin analogous models. Experiments on benchmark tasks demonstrate significant performance gains over existing rank aggregation methods.",
"title": ""
}
] |
1840456 | Weakly Supervised Object Localization Using Things and Stuff Transfer | [
{
"docid": "pos:1840456_0",
"text": "We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"title": ""
},
{
"docid": "pos:1840456_1",
"text": "In this paper, we tackle the problem of co-localization in real-world images. Co-localization is the problem of simultaneously localizing (with bounding boxes) objects of the same class across a set of distinct images. Although similar problems such as co-segmentation and weakly supervised localization have been previously studied, we focus on being able to perform co-localization in real-world settings, which are typically characterized by large amounts of intra-class variation, inter-class diversity, and annotation noise. To address these issues, we present a joint image-box formulation for solving the co-localization problem, and show how it can be relaxed to a convex quadratic program which can be efficiently solved. We perform an extensive evaluation of our method compared to previous state-of-the-art approaches on the challenging PASCAL VOC 2007 and Object Discovery datasets. In addition, we also present a large-scale study of co-localization on ImageNet, involving ground-truth annotations for 3, 624 classes and approximately 1 million images.",
"title": ""
}
] | [
{
"docid": "neg:1840456_0",
"text": "BACKGROUND\nMuscle weakness in old age is associated with physical function decline. Progressive resistance strength training (PRT) exercises are designed to increase strength.\n\n\nOBJECTIVES\nTo assess the effects of PRT on older people and identify adverse events.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Bone, Joint and Muscle Trauma Group Specialized Register (to March 2007), the Cochrane Central Register of Controlled Trials (The Cochrane Library 2007, Issue 2), MEDLINE (1966 to May 01, 2008), EMBASE (1980 to February 06 2007), CINAHL (1982 to July 01 2007) and two other electronic databases. We also searched reference lists of articles, reviewed conference abstracts and contacted authors.\n\n\nSELECTION CRITERIA\nRandomised controlled trials reporting physical outcomes of PRT for older people were included.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected trials, assessed trial quality and extracted data. Data were pooled where appropriate.\n\n\nMAIN RESULTS\nOne hundred and twenty one trials with 6700 participants were included. In most trials, PRT was performed two to three times per week and at a high intensity. PRT resulted in a small but significant improvement in physical ability (33 trials, 2172 participants; SMD 0.14, 95% CI 0.05 to 0.22). Functional limitation measures also showed improvements: e.g. there was a modest improvement in gait speed (24 trials, 1179 participants, MD 0.08 m/s, 95% CI 0.04 to 0.12); and a moderate to large effect for getting out of a chair (11 trials, 384 participants, SMD -0.94, 95% CI -1.49 to -0.38). PRT had a large positive effect on muscle strength (73 trials, 3059 participants, SMD 0.84, 95% CI 0.67 to 1.00). Participants with osteoarthritis reported a reduction in pain following PRT(6 trials, 503 participants, SMD -0.30, 95% CI -0.48 to -0.13). There was no evidence from 10 other trials (587 participants) that PRT had an effect on bodily pain. Adverse events were poorly recorded but adverse events related to musculoskeletal complaints, such as joint pain and muscle soreness, were reported in many of the studies that prospectively defined and monitored these events. Serious adverse events were rare, and no serious events were reported to be directly related to the exercise programme.\n\n\nAUTHORS' CONCLUSIONS\nThis review provides evidence that PRT is an effective intervention for improving physical functioning in older people, including improving strength and the performance of some simple and complex activities. However, some caution is needed with transferring these exercises for use with clinical populations because adverse events are not adequately reported.",
"title": ""
},
{
"docid": "neg:1840456_1",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "neg:1840456_2",
"text": "The recent expansion of the Internet of Things (IoT) and the consequent explosion in the volume of data produced by smart devices have led to the outsourcing of data to designated data centers. However, to manage these huge data stores, centralized data centers, such as cloud storage cannot afford auspicious way. There are many challenges that must be addressed in the traditional network architecture due to the rapid growth in the diversity and number of devices connected to the internet, which is not designed to provide high availability, real-time data delivery, scalability, security, resilience, and low latency. To address these issues, this paper proposes a novel blockchain-based distributed cloud architecture with a software defined networking (SDN) enable controller fog nodes at the edge of the network to meet the required design principles. The proposed model is a distributed cloud architecture based on blockchain technology, which provides low-cost, secure, and on-demand access to the most competitive computing infrastructures in an IoT network. By creating a distributed cloud infrastructure, the proposed model enables cost-effective high-performance computing. Furthermore, to bring computing resources to the edge of the IoT network and allow low latency access to large amounts of data in a secure manner, we provide a secure distributed fog node architecture that uses SDN and blockchain techniques. Fog nodes are distributed fog computing entities that allow the deployment of fog services, and are formed by multiple computing resources at the edge of the IoT network. We evaluated the performance of our proposed architecture and compared it with the existing models using various performance measures. The results of our evaluation show that performance is improved by reducing the induced delay, reducing the response time, increasing throughput, and the ability to detect real-time attacks in the IoT network with low performance overheads.",
"title": ""
},
{
"docid": "neg:1840456_3",
"text": "Across HCI and social computing platforms, mobile applications that support citizen science, empowering non-experts to explore, collect, and share data have emerged. While many of these efforts have been successful, it remains difficult to create citizen science applications without extensive programming expertise. To address this concern, we present Sensr, an authoring environment that enables people without programming skills to build mobile data collection and management tools for citizen science. We demonstrate how Sensr allows people without technical skills to create mobile applications. Findings from our case study demonstrate that our system successfully overcomes technical constraints and provides a simple way to create mobile data collection tools.",
"title": ""
},
{
"docid": "neg:1840456_4",
"text": "We propose a completely automatic approach for recognizing low resolution face images captured in uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance had both the images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost from a few reference images. Experimental evaluation on the real world challenging databases and comparison with the state-of-the-art super-resolution, classifier based and cross modal synthesis techniques show the effectiveness of the proposed algorithm.",
"title": ""
},
{
"docid": "neg:1840456_5",
"text": "To present three cases of arterial high flow priapism (HFP) and propose a management algorithm for this condition. We studied three children with post-traumatic arterial HFP (two patients with perineal trauma and one with penis trauma). Spontaneous resolution was observed in all the patients. The time of resolution by a return to a completely flaccid penis was different: 14, 27 and 36 days in each case. Absence of long-term damaging effects of arterial HFP on erectile tissue combined with the possibility of spontaneous resolution associated with blunt perineal trauma are suggestive signs for the introduction of an observation period in the management algorithm of HFP. Such a period may help to avoid unnecessary surgical intervention. Thus, these cases reinforce the decision to manage these patients conservatively and avoid angiographic embolization as a first therapeutic choice.",
"title": ""
},
{
"docid": "neg:1840456_6",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "neg:1840456_7",
"text": "A new overground body-weight support system called ZeroG has been developed that allows patients with severe gait impairments to practice gait and balance activities in a safe, controlled manner. The unloading system is capable of providing up to 300 lb of static support and 150 lb of dynamic (or constant force) support using a custom-series elastic actuator. The unloading system is mounted to a driven trolley, which rides along an overhead rail. We evaluated the performance of ZeroG's unloading system, as well as the trolley tracking system, using benchtop and human-subject testing. Average root-mean-square and peak errors in unloading were 2.2 and 7.2 percent, respectively, over the range of forces tested while trolley tracking errors were less than 3 degrees, indicating the system was able to maintain its position above the subject. We believe training with ZeroG will allow patients to practice activities that are critical to achieving functional independence at home and in the community.",
"title": ""
},
{
"docid": "neg:1840456_8",
"text": "OBJECTIVE\nHumane treatment and care of mentally ill people can be viewed from a historical perspective. Intramural (the institution) and extramural (the community) initiatives are not mutually exclusive.\n\n\nMETHOD\nThe evolution of the psychiatric institution in Canada as the primary method of care is presented from an historical perspective. A province-by-province review of provisions for mentally ill people prior to asylum construction reveals that humanitarian motives and a growing sensitivity to social and medical problems gave rise to institutional psychiatry. The influence of Great Britain, France, and, to a lesser extent, the United States in the construction of asylums in Canada is highlighted. The contemporary redirection of the Canadian mental health system toward \"dehospitalization\" is discussed and delineated.\n\n\nRESULTS\nEarly promoters of asylums were genuinely concerned with alleviating human suffering, which led to the separation of mental health services from the community and from those proffered to the criminal and indigent populations. While the results of the past institutional era were mixed, it is hoped that the \"care\" cycle will not repeat itself in the form of undesirable community alternatives.\n\n\nCONCLUSION\nSeverely psychiatrically disabled individuals can be cared for in the community if appropriate services exist.",
"title": ""
},
{
"docid": "neg:1840456_9",
"text": "The purpose of this review was to present a comprehensive review of the scientific evidence available in the literature regarding the effect of altering the occlusal vertical dimens-ion (OVD) on producing temporomandibular disorders. The authors conducted a PubMed search with the following search terms 'temporoman-dibular disorders', 'occlusal vertical dimension', 'stomatognatic system', 'masticatory muscles' and 'skeletal muscle'. Bibliographies of all retrieved articles were consulted for additional publications. Hand-searched publications from 1938 were included. The literature review revealed a lack of well-designed studies. Traditional beliefs have been based on case reports and anecdotal opinions rather than on well-controlled clinical trials. The available evidence is weak and seems to indicate that the stomatognathic system has the ability to adapt rapidly to moderate changes in occlusal vertical dimension (OVD). Nevertheless, it should be taken into consideration that in some patients mild transient symptoms may occur, but they are most often self-limiting and without major consequence. In conclusion, there is no indication that permanent alteration in the OVD will produce long-lasting TMD symptoms. However, additional studies are needed.",
"title": ""
},
{
"docid": "neg:1840456_10",
"text": "The correct grasp of objects is a key aspect for the right fulfillment of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and the quantification of a good grasp requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures.",
"title": ""
},
{
"docid": "neg:1840456_11",
"text": "This paper summarizes some of the literature on causal effects in mediation analysis. It presents causally-defined direct and indirect effects for continuous, binary, ordinal, nominal, and count variables. The expansion to non-continuous mediators and outcomes offers a broader array of causal mediation analyses than previously considered in structural equation modeling practice. A new result is the ability to handle mediation by a nominal variable. Examples with a binary outcome and a binary, ordinal or nominal mediator are given using Mplus to compute the effects. The causal effects require strong assumptions even in randomized designs, especially sequential ignorability, which is presumably often violated to some extent due to mediator-outcome confounding. To study the effects of violating this assumption, it is shown how a sensitivity analysis can be carried out. This can be used both in planning a new study and in evaluating the results of an existing study.",
"title": ""
},
{
"docid": "neg:1840456_12",
"text": "For estimating causal effects of treatments, randomized experiments are generally considered the gold standard. Nevertheless, they are often infeasible to conduct for a variety of reasons, such as ethical concerns, excessive expense, or timeliness. Consequently, much of our knowledge of causal effects must come from non-randomized observational studies. This article will advocate the position that observational studies can and should be designed to approximate randomized experiments as closely as possible. In particular, observational studies should be designed using only background information to create subgroups of similar treated and control units, where 'similar' here refers to their distributions of background variables. Of great importance, this activity should be conducted without any access to any outcome data, thereby assuring the objectivity of the design. In many situations, this objective creation of subgroups of similar treated and control units, which are balanced with respect to covariates, can be accomplished using propensity score methods. The theoretical perspective underlying this position will be presented followed by a particular application in the context of the US tobacco litigation. This application uses propensity score methods to create subgroups of treated units (male current smokers) and control units (male never smokers) who are at least as similar with respect to their distributions of observed background characteristics as if they had been randomized. The collection of these subgroups then 'approximate' a randomized block experiment with respect to the observed covariates.",
"title": ""
},
{
"docid": "neg:1840456_13",
"text": "Research paper recommenders emerged over the last decade to ease finding publications relating to researchers' area of interest. The challenge was not just to provide researchers with very rich publications at any time, any place and in any form but to also offer the right publication to the right researcher in the right way. Several approaches exist in handling paper recommender systems. However, these approaches assumed the availability of the whole contents of the recommending papers to be freely accessible, which is not always true due to factors such as copyright restrictions. This paper presents a collaborative approach for research paper recommender system. By leveraging the advantages of collaborative filtering approach, we utilize the publicly available contextual metadata to infer the hidden associations that exist between research papers in order to personalize recommendations. The novelty of our proposed approach is that it provides personalized recommendations regardless of the research field and regardless of the user's expertise. Using a publicly available dataset, our proposed approach has recorded a significant improvement over other baseline methods in measuring both the overall performance and the ability to return relevant and useful publications at the top of the recommendation list.",
"title": ""
},
{
"docid": "neg:1840456_14",
"text": "Online question and answer (Q&A) services are facing key challenges to motivate domain experts to provide quick and high-quality answers. Recent systems seek to engage real-world experts by allowing them to set a price on their answers. This leads to a \"targeted\" Q&A model where users to ask questions to a target expert by paying the price. In this paper, we perform a case study on two emerging targeted Q&A systems Fenda (China) and Whale (US) to understand how monetary incentives affect user behavior. By analyzing a large dataset of 220K questions (worth 1 million USD), we find that payments indeed enable quick answers from experts, but also drive certain users to game the system for profits. In addition, this model requires users (experts) to proactively adjust their price to make profits. People who are unwilling to lower their prices are likely to hurt their income and engagement over time.",
"title": ""
},
{
"docid": "neg:1840456_15",
"text": "This paper explores the use of set expansion (SE) to improve question answering (QA) when the expected answer is a list of entities belonging to a certain class. Given a small set of seeds, SE algorithms mine textual resources to produce an extended list including additional members of the class represented by the seeds. We explore the hypothesis that a noise-resistant SE algorithm can be used to extend candidate answers produced by a QA system and generate a new list of answers that is better than the original list produced by the QA system. We further introduce a hybrid approach which combines the original answers from the QA system with the output from the SE algorithm. Experimental results for several state-of-the-art QA systems show that the hybrid system performs better than the QA systems alone when tested on list question data from past TREC evaluations.",
"title": ""
},
{
"docid": "neg:1840456_16",
"text": "The objective of this paper is to evaluate “human action recognition without human”. Motion representation is frequently discussed in human action recognition. We have examined several sophisticated options, such as dense trajectories (DT) and the two-stream convolutional neural network (CNN). However, some features from the background could be too strong, as shown in some recent studies on human action recognition. Therefore, we considered whether a background sequence alone can classify human actions in current large-scale action datasets (e.g., UCF101). In this paper, we propose a novel concept for human action analysis that is named “human action recognition without human”. An experiment clearly shows the effect of a background sequence for understanding an action label.",
"title": ""
},
{
"docid": "neg:1840456_17",
"text": "We propose a novel framework for controllable natural language transformation. Realizing that the requirement of parallel corpus is practically unsustainable for controllable generation tasks, an unsupervised training scheme is introduced. The crux of the framework is a deep neural encoder-decoder that is reinforced with text-transformation knowledge through auxiliary modules (called scorers). These scorers, based on off-the-shelf language processing tools, decide the learning scheme of the encoder-decoder based on its actions. We apply this framework for the text-transformation task of formalizing an input text by improving its readability grade; the degree of required formalization can be controlled by the user at run-time. Experiments on public datasets demonstrate the efficacy of our model towards: (a) transforming a given text to a more formal style, and (b) varying the amount of formalness in the output text based on the specified input control. Our code and datasets are released for academic use.",
"title": ""
},
{
"docid": "neg:1840456_18",
"text": "Convolutional Neural Networks (CNNs) have been applied to visual tracking with demonstrated success in recent years. Most CNN-based trackers utilize hierarchical features extracted from a certain layer to represent the target. However, features from a certain layer are not always effective for distinguishing the target object from the backgrounds especially in the presence of complicated interfering factors (e.g., heavy occlusion, background clutter, illumination variation, and shape deformation). In this work, we propose a CNN-based tracking algorithm which hedges deep features from different CNN layers to better distinguish target objects and background clutters. Correlation filters are applied to feature maps of each CNN layer to construct a weak tracker, and all weak trackers are hedged into a strong one. For robust visual tracking, we propose a hedge method to adaptively determine weights of weak classifiers by considering both the difference between the historical as well as instantaneous performance, and the difference among all weak trackers over time. In addition, we design a siamese network to define the loss of each weak tracker for the proposed hedge method. Extensive experiments on large benchmark datasets demonstrate the effectiveness of the proposed algorithm against the state-of-the-art tracking methods.",
"title": ""
},
{
"docid": "neg:1840456_19",
"text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.",
"title": ""
}
] |
1840457 | A survey on the communication architectures in smart grid | [
{
"docid": "pos:1840457_0",
"text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.",
"title": ""
}
] | [
{
"docid": "neg:1840457_0",
"text": "Power and energy have become increasingly important concerns in the design and implementation of today's multicore/manycore chips. In this paper, we present two priority-based CPU scheduling algorithms, Algorithm Cache Miss Priority CPU Scheduler (CM-PCS) and Algorithm Context Switch Priority CPU Scheduler (CS-PCS), which take advantage of often ignored dynamic performance data, in order to reduce power consumption by over 20 percent with a significant increase in performance. Our algorithms utilize Linux cpusets and cores operating at different fixed frequencies. Many other techniques, including dynamic frequency scaling, can lower a core's frequency during the execution of a non-CPU intensive task, thus lowering performance. Our algorithms match processes to cores better suited to execute those processes in an effort to lower the average completion time of all processes in an entire task, thus improving performance. They also consider a process's cache miss/cache reference ratio, number of context switches and CPU migrations, and system load. Finally, our algorithms use dynamic process priorities as scheduling criteria. We have tested our algorithms using a real AMD Opteron 6134 multicore chip and measured results directly using the “KillAWatt” meter, which samples power periodically during execution. Our results show not only a power (energy/execution time) savings of 39 watts (21.43 percent) and 38 watts (20.88 percent), but also a significant improvement in the performance, performance per watt, and execution time · watt (energy) for a task consisting of 24 concurrently executing benchmarks, when compared to the default Linux scheduler and CPU frequency scaling governor.",
"title": ""
},
{
"docid": "neg:1840457_1",
"text": "Data from service and manufacturing sectors is increasing sharply and lifts up a growing enthusiasm for the notion of Big Data. This paper investigates representative Big Data applications from typical services like finance & economics, healthcare, Supply Chain Management (SCM), and manufacturing sector. Current technologies from key aspects of storage technology, data processing technology, data visualization technique, Big Data analytics, as well as models and algorithms are reviewed. This paper then provides a discussion from analyzing current movements on the Big Data for SCM in service and manufacturing world-wide including North America, Europe, and Asia Pacific region. Current challenges, opportunities, and future perspectives such as data collection methods, data transmission, data storage, processing technologies for Big Data, Big Data-enabled decision-making models, as well as Big Data interpretation and application are highlighted. Observations and insights from this paper could be referred by academia and practitioners when implementing Big Data analytics in the service and manufacturing sectors. 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840457_2",
"text": "Phishing is defined as mimicking a creditable company's website aiming to take private information of a user. In order to eliminate phishing, different solutions proposed. However, only one single magic bullet cannot eliminate this threat completely. Data mining is a promising technique used to detect phishing attacks. In this paper, an intelligent system to detect phishing attacks is presented. We used different data mining techniques to decide categories of websites: legitimate or phishing. Different classifiers were used in order to construct accurate intelligent system for phishing website detection. Classification accuracy, area under receiver operating characteristic (ROC) curves (AUC) and F-measure is used to evaluate the performance of the data mining techniques. Results showed that Random Forest has outperformed best among the classification methods by achieving the highest accuracy 97.36%. Random forest runtimes are quite fast, and it can deal with different websites for phishing detection.",
"title": ""
},
{
"docid": "neg:1840457_3",
"text": "We present Scalable Host-tree Embeddings for Efficient Partitioning (Sheep), a distributed graph partitioning algorithm capable of handling graphs that far exceed main memory. Sheep produces high quality edge partitions an order of magnitude faster than both state of the art offline (e.g., METIS) and streaming partitioners (e.g., Fennel). Sheep’s partitions are independent of the input graph distribution, which means that graph elements can be assigned to processing nodes arbitrarily without affecting the partition quality. Sheep transforms the input graph into a strictly smaller elimination tree via a distributed map-reduce operation. By partitioning this tree, Sheep finds an upper-bounded communication volume partitioning of the original graph. We describe the Sheep algorithm and analyze its spacetime requirements, partition quality, and intuitive characteristics and limitations. We compare Sheep to contemporary partitioners and demonstrate that Sheep creates competitive partitions, scales to larger graphs, and has better runtime.",
"title": ""
},
{
"docid": "neg:1840457_4",
"text": "We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.",
"title": ""
},
{
"docid": "neg:1840457_5",
"text": "This letter presents a compact broadband microstrip-line-fed sleeve monopole antenna for application in the DTV system. The design of meandering the monopole into a compact structure is applied for size reduction. By properly selecting the length and spacing of the sleeve, the broadband operation for the proposed design can be achieved, and the obtained impedance bandwidth covers the whole DTV (470862 MHz) band. Most importantly, the matching condition over a wide frequency range can be performed well even when a small ground-plane length is used; meanwhile, a small variation in the impedance bandwidth is observed for the ground-plane length varied in a great range.",
"title": ""
},
{
"docid": "neg:1840457_6",
"text": "Exploiting network data (i.e., graphs) is a rather particular case of data mining. The size and relevance of network domains justifies research on graph mining, but also brings forth severe complications. Computational aspects like scalability and parallelism have to be reevaluated, and well as certain aspects of the data mining process. One of those are the methodologies used to evaluate graph mining methods, particularly when processing large graphs. In this paper we focus on the evaluation of a graph mining task known as Link Prediction. First we explore the available solutions in traditional data mining for that purpose, discussing which methods are most appropriate. Once those are identified, we argue about their capabilities and limitations for producing a faithful and useful evaluation. Finally, we introduce a novel modification to a traditional evaluation methodology with the goal of adapting it to the problem of Link Prediction on large graphs.",
"title": ""
},
{
"docid": "neg:1840457_7",
"text": "We use a deep learning model trained only on a patient’s blood oxygenation data (measurable with an inexpensive fingertip sensor) to predict impending hypoxemia (low blood oxygen) more accurately than trained anesthesiologists with access to all the data recorded in a modern operating room. We also provide a simple way to visualize the reason why a patient’s risk is low or high by assigning weight to the patient’s past blood oxygen values. This work has the potential to provide cuttingedge clinical decision support in low-resource settings, where rates of surgical complication and death are substantially greater than in high-resource areas.",
"title": ""
},
{
"docid": "neg:1840457_8",
"text": "Production cars are designed to understeer and rarely do they oversteer. If a car could automatically compensate for an understeer/oversteer problem, the driver would enjoy nearly neutral steering under varying operating conditions. Four-wheel steering is a serious effort on the part of automotive design engineers to provide near-neutral steering. Also in situations like low speed cornering, vehicle parking and driving in city conditions with heavy traffic in tight spaces, driving would be very difficult due to vehicle’s larger wheelbase and track width. Hence there is a requirement of a mechanism which result in less turning radius and it can be achieved by implementing four wheel steering mechanism instead of regular two wheel steering. In this project Maruti Suzuki 800 is considered as a benchmark vehicle. The main aim of this project is to turn the rear wheels out of phase to the front wheels. In order to achieve this, a mechanism which consists of two bevel gears and intermediate shaft which transmit 100% torque as well turns rear wheels in out of phase was developed. The mechanism was modelled using CATIA and the motion simulation was done using ADAMS. A physical prototype was realised. The prototype was tested for its cornering ability through constant radius test and was found 50% reduction in turning radius and the vehicle was operated at low speed of 10 kmph.",
"title": ""
},
{
"docid": "neg:1840457_9",
"text": "Wikis are collaborative systems in which virtually anyone can edit anything. Although wikis have become highly popular in many domains, their mutable nature often leads them to be distrusted as a reliable source of information. Here we describe a social dynamic analysis tool called WikiDashboard which aims to improve social transparency and accountability on Wikipedia articles. Early reactions from users suggest that the increased transparency afforded by the tool can improve the interpretation, communication, and trustworthiness of Wikipedia articles.",
"title": ""
},
{
"docid": "neg:1840457_10",
"text": "An algorithm is presented for the estimation of the fundamental frequency (F0) of speech or musical sounds. It is based on the well-known autocorrelation method with a number of modifications that combine to prevent errors. The algorithm has several desirable features. Error rates are about three times lower than the best competing methods, as evaluated over a database of speech recorded together with a laryngograph signal. There is no upper limit on the frequency search range, so the algorithm is suited for high-pitched voices and music. The algorithm is relatively simple and may be implemented efficiently and with low latency, and it involves few parameters that must be tuned. It is based on a signal model (periodic signal) that may be extended in several ways to handle various forms of aperiodicity that occur in particular applications. Finally, interesting parallels may be drawn with models of auditory processing.",
"title": ""
},
{
"docid": "neg:1840457_11",
"text": "A systematic, tiered approach to assess the safety of engineered nanomaterials (ENMs) in foods is presented. The ENM is first compared to its non-nano form counterpart to determine if ENM-specific assessment is required. Of highest concern from a toxicological perspective are ENMs which have potential for systemic translocation, are insoluble or only partially soluble over time or are particulate and bio-persistent. Where ENM-specific assessment is triggered, Tier 1 screening considers the potential for translocation across biological barriers, cytotoxicity, generation of reactive oxygen species, inflammatory response, genotoxicity and general toxicity. In silico and in vitro studies, together with a sub-acute repeat-dose rodent study, could be considered for this phase. Tier 2 hazard characterisation is based on a sentinel 90-day rodent study with an extended range of endpoints, additional parameters being investigated case-by-case. Physicochemical characterisation should be performed in a range of food and biological matrices. A default assumption of 100% bioavailability of the ENM provides a 'worst case' exposure scenario, which could be refined as additional data become available. The safety testing strategy is considered applicable to variations in ENM size within the nanoscale and to new generations of ENM.",
"title": ""
},
{
"docid": "neg:1840457_12",
"text": "The deployment of wireless sensor networks and mobile ad-hoc networks in applications such as emergency services, warfare and health monitoring poses the threat of various cyber hazards, intrusions and attacks as a consequence of these networks’ openness. Among the most significant research difficulties in such networks safety is intrusion detection, whose target is to distinguish between misuse and abnormal behavior so as to ensure secure, reliable network operations and services. Intrusion detection is best delivered by multi-agent system technologies and advanced computing techniques. To date, diverse soft computing and machine learning techniques in terms of computational intelligence have been utilized to create Intrusion Detection and Prevention Systems (IDPS), yet the literature does not report any state-ofthe-art reviews investigating the performance and consequences of such techniques solving wireless environment intrusion recognition issues as they gain entry into cloud computing. The principal contribution of this paper is a review and categorization of existing IDPS schemes in terms of traditional artificial computational intelligence with a multi-agent support. The significance of the techniques and methodologies and their performance and limitations are additionally analyzed in this study, and the limitations are addressed as challenges to obtain a set of requirements for IDPS in establishing a collaborative-based wireless IDPS (Co-WIDPS) architectural design. It amalgamates a fuzzy reinforcement learning knowledge management by creating a far superior technological platform that is far more accurate in detecting attacks. In conclusion, we elaborate on several key future research topics with the potential to accelerate the progress and deployment of computational intelligence based Co-WIDPSs. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840457_13",
"text": "In this paper, we give a short introduction to machine learning and survey its applications in radiology. We focused on six categories of applications in radiology: medical image segmentation, registration, computer aided detection and diagnosis, brain function or activity analysis and neurological disease diagnosis from fMR images, content-based image retrieval systems for CT or MRI images, and text analysis of radiology reports using natural language processing (NLP) and natural language understanding (NLU). This survey shows that machine learning plays a key role in many radiology applications. Machine learning identifies complex patterns automatically and helps radiologists make intelligent decisions on radiology data such as conventional radiographs, CT, MRI, and PET images and radiology reports. In many applications, the performance of machine learning-based automatic detection and diagnosis systems has shown to be comparable to that of a well-trained and experienced radiologist. Technology development in machine learning and radiology will benefit from each other in the long run. Key contributions and common characteristics of machine learning techniques in radiology are discussed. We also discuss the problem of translating machine learning applications to the radiology clinical setting, including advantages and potential barriers.",
"title": ""
},
{
"docid": "neg:1840457_14",
"text": "This paper assesses the impact of financial reforms in Zimbabwe on savings and credit availability to small and medium scale enterprises (SMEs) and the poor. We established that the reforms improved domestic savings mobilization due to high deposit rates, the emergence of new financial institutions and products and the general increase in real incomes after the 1990 economic reforms. The study uncovered that inflation and real income were the major determinants of savings during the sample period. High lending rates and the use of conventional lending methodologies by banks restricted access to credit by the SMEs and the poor. JEL Classification Numbers: E21, O16.",
"title": ""
},
{
"docid": "neg:1840457_15",
"text": "A viable fully on-line adaptive brain computer interface (BCI) is introduced. On-line experiments with nine naive and able-bodied subjects were carried out using a continuously adaptive BCI system. The data were analyzed and the viability of the system was studied. The BCI was based on motor imagery, the feature extraction was performed with an adaptive autoregressive model and the classifier used was an adaptive quadratic discriminant analysis. The classifier was on-line updated by an adaptive estimation of the information matrix (ADIM). The system was also able to provide continuous feedback to the subject. The success of the feedback was studied analyzing the error rate and mutual information of each session and this analysis showed a clear improvement of the subject's control of the BCI from session to session.",
"title": ""
},
{
"docid": "neg:1840457_16",
"text": "OBJECTIVE\nDevelopment of a rational and enforceable basis for controlling the impact of cannabis use on traffic safety.\n\n\nMETHODS\nAn international working group of experts on issues related to drug use and traffic safety evaluated evidence from experimental and epidemiological research and discussed potential approaches to developing per se limits for cannabis.\n\n\nRESULTS\nIn analogy to alcohol, finite (non-zero) per se limits for delta-9-tetrahydrocannabinol (THC) in blood appear to be the most effective approach to separating drivers who are impaired by cannabis use from those who are no longer under the influence. Limited epidemiological studies indicate that serum concentrations of THC below 10 ng/ml are not associated with an elevated accident risk. A comparison of meta-analyses of experimental studies on the impairment of driving-relevant skills by alcohol or cannabis suggests that a THC concentration in the serum of 7-10 ng/ml is correlated with an impairment comparable to that caused by a blood alcohol concentration (BAC) of 0.05%. Thus, a suitable numerical limit for THC in serum may fall in that range.\n\n\nCONCLUSIONS\nThis analysis offers an empirical basis for a per se limit for THC that allows identification of drivers impaired by cannabis. The limited epidemiological data render this limit preliminary.",
"title": ""
},
{
"docid": "neg:1840457_17",
"text": "Business process modeling is a big part in the industry, mainly to document, analyze, and optimize workflows. Currently, the EPC process modeling notation is used very wide, because of the excellent integration in the ARIS Toolset and the long existence of this process language. But as a change of time, BPMN gets popular and the interest in the industry and companies gets growing up. It is standardized, has more expressiveness than EPC and the tool support increase very rapidly. With having tons of existing EPC process models; a big need from the industry is to have an automated transformation from EPC to BPMN. This paper specified a direct approach of a transformation from EPC process model elements to BPMN. Thereby it is tried to map every construct in EPC fully automated to BPMN. But as it is described, not for every process element works this out, so in addition, some extensions and semantics rules are defined.",
"title": ""
},
{
"docid": "neg:1840457_18",
"text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.",
"title": ""
},
{
"docid": "neg:1840457_19",
"text": "In this paper, we present a detailed design of dynamic video segmentation network (DVSNet) for fast and efficient semantic video segmentation. DVSNet consists of two convolutional neural networks: a segmentation network and a flow network. The former generates highly accurate semantic segmentations, but is deeper and slower. The latter is much faster than the former, but its output requires further processing to generate less accurate semantic segmentations. We explore the use of a decision network to adaptively assign different frame regions to different networks based on a metric called expected confidence score. Frame regions with a higher expected confidence score traverse the flow network. Frame regions with a lower expected confidence score have to pass through the segmentation network. We have extensively performed experiments on various configurations of DVSNet, and investigated a number of variants for the proposed decision network. The experimental results show that our DVSNet is able to achieve up to 70.4% mIoU at 19.8 fps on the Cityscape dataset. A high speed version of DVSNet is able to deliver an fps of 30.4 with 63.2% mIoU on the same dataset. DVSNet is also able to reduce up to 95% of the computational workloads.",
"title": ""
}
] |
1840458 | Cross-Point Architecture for Spin-Transfer Torque Magnetic Random Access Memory | [
{
"docid": "pos:1840458_0",
"text": "This paper reports a 45nm spin-transfer-torque (STT) MRAM embedded into a standard CMOS logic platform that employs low-power (LP) transistors and Cu/low-k BEOL. We believe that this is the first-ever demonstration of embedded STT MRAM that is fully compatible with the 45nm logic technology. To ensure the switching margin, a novel Ȝreverse-connectionȝ 1T/1MT cell has been developed with a cell size of 0.1026 µm2. This cell is utilized to build embedded memory macros up to 32 Mbits in density. Device attributes and design windows have been examined by considering PVT variations to secure operating margins. Promising early reliability data on endurance, read disturb, and thermal stability have been obtained.",
"title": ""
}
] | [
{
"docid": "neg:1840458_0",
"text": "We describe the capabilities of and algorithms used in a ne w FPGA CAD tool, Versatile Place and Route (VPR). In terms of minimizing routing area, VPR outperforms all published FPGA place and route tools to which we can compare. Although the algorithms used are based on pre viously known approaches, we present se veral enhancements that impro ve run-time and quality . We present placement and routing results on a ne w set of lar ge circuits to allo w future benchmark comparisons of FPGA place and route tools on circuit sizes more typical of today’ s industrial designs. VPR is capable of tar geting a broad range of FPGA architectures, and the source code is publicly a vailable. It and the associated netlist translation / clustering tool VPACK have already been used in a number of research projects w orldwide, and should be useful in man y areas of FPGA architecture research.",
"title": ""
},
{
"docid": "neg:1840458_1",
"text": "We use a hierarchical Bayesian approach to model user preferences in different contexts or settings. Unlike many previous recommenders, our approach is content-based. We assume that for each context, a user has a different set of preference weights which are linked by a common, “generic context” set of weights. The approach uses Expectation Maximization (EM) to estimate both the generic context weights and the context specific weights. This improves upon many current recommender systems that do not incorporate context into the recommendations they provide. In this paper, we show that by considering contextual information, we can improve our recommendations, demonstrating that it is useful to consider context in giving ratings. Because the approach does not rely on connecting users via collaborative filtering, users are able to interpret contexts in different ways and invent their own",
"title": ""
},
{
"docid": "neg:1840458_2",
"text": "In the field of Evolutionary Computation, a common myth that “An Evolutionary Algorithm (EA) will outperform a local search algorithm, given enough runtime and a large-enough population” exists. We believe that this is not necessarily true and challenge the statement with several simple considerations. We then investigate the population size parameter of EAs, as this is the element in the above claim that can be controlled. We conduct a related work study, which substantiates the assumption that there should be an optimal setting for the population size at which a specific EA would perform best on a given problem instance and computational budget. Subsequently, we carry out a large-scale experimental study on 68 instances of the Traveling Salesman Problem with static population sizes that are powers of two between (1+2) and (262 144 + 524 288) EAs as well as with adaptive population sizes. We find that analyzing the performance of the different setups over runtime supports our point of view and the existence of optimal finite population size settings.",
"title": ""
},
{
"docid": "neg:1840458_3",
"text": "The goal of Open Information Extraction (OIE) is to extract surface relations and their arguments from naturallanguage text in an unsupervised, domainindependent manner. In this paper, we propose MinIE, an OIE system that aims to provide useful, compact extractions with high precision and recall. MinIE approaches these goals by (1) representing information about polarity, modality, attribution, and quantities with semantic annotations instead of in the actual extraction, and (2) identifying and removing parts that are considered overly specific. We conducted an experimental study with several real-world datasets and found that MinIE achieves competitive or higher precision and recall than most prior systems, while at the same time producing shorter, semantically enriched extractions.",
"title": ""
},
{
"docid": "neg:1840458_4",
"text": "Vehicular drivers and shift workers in industry are at most risk of handling life critical tasks. The drivers traveling long distances or when they are tired, are at risk of a meeting an accident. The early hours of the morning and the middle of the afternoon are the peak times for fatigue driven accidents. The difficulty in determining the incidence of fatigue-related accidents is due, at least in part, to the difficulty in identifying fatigue as a causal or causative factor in accidents. In this paper we propose an alternative approach for fatigue detection in vehicular drivers using Respiration (RSP) signal to reduce the losses of the lives and vehicular accidents those occur due to cognitive fatigue of the driver. We are using basic K-means algorithm with proposed two modifications as classifier for detection of Respiration signal two state fatigue data recorded from the driver. The K-means classifiers [11] were trained and tested for wavelet feature of Respiration signal. The extracted features were treated as individual decision making parameters. From test results it could be found that some of the wavelet features could fetch 100 % classification accuracy.",
"title": ""
},
{
"docid": "neg:1840458_5",
"text": "In machine learning often a tradeoff must be made between accuracy and intelligibility. More accurate models such as boosted trees, random forests, and neural nets usually are not intelligible, but more intelligible models such as logistic regression, naive-Bayes, and single decision trees often have significantly worse accuracy. This tradeoff sometimes limits the accuracy of models that can be applied in mission-critical applications such as healthcare where being able to understand, validate, edit, and trust a learned model is important. We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy. In the pneumonia risk prediction case study, the intelligible model uncovers surprising patterns in the data that previously had prevented complex learned models from being fielded in this domain, but because it is intelligible and modular allows these patterns to be recognized and removed. In the 30-day hospital readmission case study, we show that the same methods scale to large datasets containing hundreds of thousands of patients and thousands of attributes while remaining intelligible and providing accuracy comparable to the best (unintelligible) machine learning methods.",
"title": ""
},
{
"docid": "neg:1840458_6",
"text": "Friction characteristics between the wafer and the polishing pad play an important role in the chemical-mechanical planarization (CMP) process. In this paper, a wafer/pad friction modeling and monitoring scheme for the linear CMP process is presented. Kinematic analysis of the linear CMP system is investigated and a distributed LuGre dynamic friction model is utilized to capture the friction forces generated by the wafer/pad interactions. The frictional torques of both the polisher spindle and the roller systems are used to monitor in situ the changes of the friction coefficient during a CMP process. Effects of pad conditioning and patterned wafer topography on the wafer/pad friction are also analyzed and discussed. The proposed friction modeling and monitoring scheme can be further used for real-time CMP monitoring and process fault diagnosis.",
"title": ""
},
{
"docid": "neg:1840458_7",
"text": "Research in child fatalities because of abuse and neglect has continued to increase, yet the mechanisms of the death incident and risk factors for these deaths remain unclear. The purpose of this study was to systematically examine the types of neglect that resulted in children's deaths as determined by child welfare and a child death review board. This case review study reviewed 22 years of data (n=372) of child fatalities attributed solely to neglect taken from a larger sample (N=754) of abuse and neglect death cases spanning the years 1987-2008. The file information reviewed was provided by the Oklahoma Child Death Review Board (CDRB) and the Oklahoma Department of Human Services (DHS) Division of Children and Family Services. Variables of interest were child age, ethnicity, and birth order; parental age and ethnicity; cause of death as determined by child protective services (CPS); and involvement with DHS at the time of the fatal event. Three categories of fatal neglect--supervisory neglect, deprivation of needs, and medical neglect--were identified and analyzed. Results found an overwhelming presence of supervisory neglect in child neglect fatalities and indicated no significant differences between children living in rural and urban settings. Young children and male children comprised the majority of fatalities, and African American and Native American children were over-represented in the sample when compared to the state population. This study underscores the critical need for prevention and educational programming related to appropriate adult supervision and adequate safety measures to prevent a child's death because of neglect.",
"title": ""
},
{
"docid": "neg:1840458_8",
"text": "High profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of the proposed solution. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions. This paper presents our work to generate representative labeled data sets for SCADA networks that security researcher can use freely. The data sets include packet captures including both malicious and non-malicious Modbus traffic and accompanying CSV files that contain labels to provide the ground truth for supervised machine learning. To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.",
"title": ""
},
{
"docid": "neg:1840458_9",
"text": "Augmented Reality can be of immense benefit to the construction industry. The oft-cited benefits of AR in construction industry include real time visualization of projects, project monitoring by overlaying virtual models on actual built structures and onsite information retrieval. But this technology is restricted by the high cost and limited portability of the devices. Further, problems with real time and accurate tracking in a construction environment hinder its broader application. To enable utilization of augmented reality on a construction site, a low cost augmented reality framework based on the Google Cardboard visor is proposed. The current applications available for Google cardboard has several limitations in delivering an AR experience relevant to construction requirements. To overcome these limitations Unity game engine, with the help of Vuforia & Cardboard SDK, is used to develop an application environment which can be used for location and orientation specific visualization and planning of work at construction workface. The real world image is captured through the smart-phone camera input and blended with the stereo input of the 3D models to enable a full immersion experience. The application is currently limited to marker based tracking where the 3D models are triggered onto the user’s view upon scanning an image which is registered with a corresponding 3D model preloaded into the application. A gaze input user interface is proposed which enables the user to interact with the augmented models. Finally usage of AR app while traversing the construction site is illustrated.",
"title": ""
},
{
"docid": "neg:1840458_10",
"text": "person’s ability to focus on his or her primary task. Distractions occur especially in mobile environments, because walking, driving, or other real-world interactions often preoccupy the user. A pervasivecomputing environment that minimizes distraction must be context aware, and a pervasive-computing system must know the user’s state to accommodate his or her needs. Context-aware applications provide at least two fundamental services: spatial awareness and temporal awareness. Spatially aware applications consider a user’s relative and absolute position and orientation. Temporally aware applications consider the time schedules of public and private events. With an interdisciplinary class of Carnegie Mellon University (CMU) students, we developed and implemented a context-aware, pervasive-computing environment that minimizes distraction and facilitates collaborative design.",
"title": ""
},
{
"docid": "neg:1840458_11",
"text": "BACKGROUND\nNeuropathic pain is one of the most devastating kinds of chronic pain. Neuroinflammation has been shown to contribute to the development of neuropathic pain. We have previously demonstrated that lumbar spinal cord-infiltrating CD4+ T lymphocytes contribute to the maintenance of mechanical hypersensitivity in spinal nerve L5 transection (L5Tx), a murine model of neuropathic pain. Here, we further examined the phenotype of the CD4+ T lymphocytes involved in the maintenance of neuropathic pain-like behavior via intracellular flow cytometric analysis and explored potential interactions between infiltrating CD4+ T lymphocytes and spinal cord glial cells.\n\n\nRESULTS\nWe consistently observed significantly higher numbers of T-Bet+, IFN-γ+, TNF-α+, and GM-CSF+, but not GATA3+ or IL-4+, lumbar spinal cord-infiltrating CD4+ T lymphocytes in the L5Tx group compared to the sham group at day 7 post-L5Tx. This suggests that the infiltrating CD4+ T lymphocytes expressed a pro-inflammatory type 1 phenotype (Th1). Despite the observation of CD4+ CD40 ligand (CD154)+ T lymphocytes in the lumbar spinal cord post-L5Tx, CD154 knockout (KO) mice did not display significant changes in L5Tx-induced mechanical hypersensitivity, indicating that T lymphocyte-microglial interaction through the CD154-CD40 pathway is not necessary for L5Tx-induced hypersensitivity. In addition, spinal cord astrocytic activation, represented by glial fibillary acidic protein (GFAP) expression, was significantly lower in CD4 KO mice compared to wild type (WT) mice at day 14 post-L5Tx, suggesting the involvement of astrocytes in the pronociceptive effects mediated by infiltrating CD4+ T lymphocytes.\n\n\nCONCLUSIONS\nIn all, these data indicate that the maintenance of L5Tx-induced neuropathic pain is mostly mediated by Th1 cells in a CD154-independent manner via a mechanism that could involve multiple Th1 cytokines and astrocytic activation.",
"title": ""
},
{
"docid": "neg:1840458_12",
"text": "Social networks consist of various communities that host members sharing common characteristics. Often some members of one community are also members of other communities. Such shared membership of different communities leads to overlapping communities. Detecting such overlapping communities is a challenging and computationally intensive problem. In this paper, we investigate the usability of high performance computing in the area of social networks and community detection. We present highly scalable variants of a community detection algorithm called Speaker-listener Label Propagation Algorithm (SLPA). We show that despite of irregular data dependencies in the computation, parallel computing paradigms can significantly speed up the detection of overlapping communities of social networks which is computationally expensive. We show by experiments, how various parallel computing architectures can be utilized to analyze large social network data on both shared memory machines and distributed memory machines, such as IBM Blue Gene.",
"title": ""
},
{
"docid": "neg:1840458_13",
"text": "Recently, the concept of olfaction-enhanced multimedia applications has gained traction as a step toward further enhancing user quality of experience. The next generation of rich media services will be immersive and multisensory, with olfaction playing a key role. This survey reviews current olfactory-related research from a number of perspectives. It introduces and explains relevant olfactory psychophysical terminology, knowledge of which is necessary for working with olfaction as a media component. In addition, it reviews and highlights the use of, and potential for, olfaction across a number of application domains, namely health, tourism, education, and training. A taxonomy of research and development of olfactory displays is provided in terms of display type, scent generation mechanism, application area, and strengths/weaknesses. State of the art research works involving olfaction are discussed and associated research challenges are proposed.",
"title": ""
},
{
"docid": "neg:1840458_14",
"text": "...................................................................................................................... iii ACKNOWLEDGEMENTS .................................................................................................v CHAPTER",
"title": ""
},
{
"docid": "neg:1840458_15",
"text": "Over the last few years or so, the use of artificial neural networks (ANNs) has increased in many areas of engineering. In particular, ANNs have been applied to many geotechnical engineering problems and have demonstrated some degree of success. A review of the literature reveals that ANNs have been used successfully in pile capacity prediction, modelling soil behaviour, site characterisation, earth retaining structures, settlement of structures, slope stability, design of tunnels and underground openings, liquefaction, soil permeability and hydraulic conductivity, soil compaction, soil swelling and classification of soils. The objective of this paper is to provide a general view of some ANN applications for solving some types of geotechnical engineering problems. It is not intended to describe the ANNs modelling issues in geotechnical engineering. The paper also does not intend to cover every single application or scientific paper that found in the literature. For brevity, some works are selected to be described in some detail, while others are acknowledged for reference purposes. The paper then discusses the strengths and limitations of ANNs compared with the other modelling approaches.",
"title": ""
},
{
"docid": "neg:1840458_16",
"text": "A key issue in the direct torque control of permanent magnet brushless DC motors is the estimation of the instantaneous electromagnetic torque, while sensorless control is often advantageous. A sliding mode observer is employed to estimate the non-sinusoidal back-emf waveform, and a simplified extended Kalman filter is used to estimate the rotor speed. Both are combined to calculate the instantaneous electromagnetic torque, the effectiveness of this approach being validated by simulations and measurements.",
"title": ""
},
{
"docid": "neg:1840458_17",
"text": "Gravity is the only component of Earth environment that remained constant throughout the entire process of biological evolution. However, it is still unclear how gravity affects plant growth and development. In this study, an in vitro cell culture of Arabidopsis thaliana was exposed to different altered gravity conditions, namely simulated reduced gravity (simulated microgravity, simulated Mars gravity) and hypergravity (2g), to study changes in cell proliferation, cell growth, and epigenetics. The effects after 3, 14, and 24-hours of exposure were evaluated. The most relevant alterations were found in the 24-hour treatment, being more significant for simulated reduced gravity than hypergravity. Cell proliferation and growth were uncoupled under simulated reduced gravity, similarly, as found in meristematic cells from seedlings grown in real or simulated microgravity. The distribution of cell cycle phases was changed, as well as the levels and gene transcription of the tested cell cycle regulators. Ribosome biogenesis was decreased, according to levels and gene transcription of nucleolar proteins and the number of inactive nucleoli. Furthermore, we found alterations in the epigenetic modifications of chromatin. These results show that altered gravity effects include a serious disturbance of cell proliferation and growth, which are cellular functions essential for normal plant development.",
"title": ""
},
{
"docid": "neg:1840458_18",
"text": "Monopulse is a classical radar technique [1] of precise direction finding of a source or target. The concept can be used both in radar applications as well as in modern communication techniques. The information contained in antenna sidelobes normally disturbs the determination of DOA in the case of a classical monopulse system. The suitable combination of amplitudeand phase-monopulse algorithm leads to the novel complex monopulse algorithm (CMP), which also can utilise information from the sidelobes by using the phase shift of the signals in the sidelobes in relation to the mainlobes.",
"title": ""
},
{
"docid": "neg:1840458_19",
"text": "Lenz-Majewski hyperostotic dwarfism (LMHD) is an ultra-rare Mendelian craniotubular dysostosis that causes skeletal dysmorphism and widely distributed osteosclerosis. Biochemical and histopathological characterization of the bone disease is incomplete and nonexistent, respectively. In 2014, a publication concerning five unrelated patients with LMHD disclosed that all carried one of three heterozygous missense mutations in PTDSS1 encoding phosphatidylserine synthase 1 (PSS1). PSS1 promotes the biosynthesis of phosphatidylserine (PTDS), which is a functional constituent of lipid bilayers. In vitro, these PTDSS1 mutations were gain-of-function and increased PTDS production. Notably, PTDS binds calcium within matrix vesicles to engender hydroxyapatite crystal formation, and may enhance mesenchymal stem cell differentiation leading to osteogenesis. We report an infant girl with LMHD and a novel heterozygous missense mutation (c.829T>C, p.Trp277Arg) within PTDSS1. Bone turnover markers suggested that her osteosclerosis resulted from accelerated formation with an unremarkable rate of resorption. Urinary amino acid quantitation revealed a greater than sixfold elevation of phosphoserine. Our findings affirm that PTDSS1 defects cause LMHD and support enhanced biosynthesis of PTDS in the pathogenesis of LMHD.",
"title": ""
}
] |
1840459 | 28 GHz channel modeling using 3D ray-tracing in urban environments | [
{
"docid": "pos:1840459_0",
"text": "This deliverable describes WINNER II channel models for link and system level simulations. Both generic and clustered delay line models are defined for selected propagation scenarios. Disclaimer: The channel models described in this deliverable are based on a literature survey and measurements performed during this project. The authors are not responsible for any loss, damage or expenses caused by potential errors or inaccuracies in the models or in the deliverable. Executive Summary This deliverable presents WINNER II channel models for link level and system level simulations of local area, metropolitan area, and wide area wireless communication systems. The models have been evolved from the WINNER I channel models described in WINNER I deliverable D5.4 and WINNER II interim channel models described in deliverable D1.1.1. The covered propagation scenarios are indoor office, large indoor hall, indoor-to-outdoor, urban micro-cell, bad urban micro-cell, outdoor-to-indoor, stationary feeder, suburban macro-cell, urban macro-cell, rural macro-cell, and rural moving networks. The generic WINNER II channel model follows a geometry-based stochastic channel modelling approach, which allows creating of an arbitrary double directional radio channel model. The channel models are antenna independent, i.e., different antenna configurations and different element patterns can be inserted. The channel parameters are determined stochastically, based on statistical distributions extracted from channel measurement. The distributions are defined for, e.g., delay spread, delay values, angle spread, shadow fading, and cross-polarisation ratio. For each channel snapshot the channel parameters are calculated from the distributions. Channel realisations are generated by summing contributions of rays with specific channel parameters like delay, power, angle-of-arrival and angle-of-departure. Different scenarios are modelled by using the same approach, but different parameters. The parameter tables for each scenario are included in this deliverable. Clustered delay line (CDL) models with fixed large-scale and small-scale parameters have also been created for calibration and comparison of different simulations. The parameters of the CDL models are based on expectation values of the generic models. Several measurement campaigns provide the background for the parameterisation of the propagation scenarios for both line-of-sight (LOS) and non-LOS (NLOS) conditions. These measurements were conducted by seven partners with different devices. The developed models are based on both literature and extensive measurement campaigns that have been carried out within the WINNER I and WINNER II projects. The novel features of the WINNER models are its parameterisation, using of the same modelling approach for both indoor and outdoor environments, new scenarios like outdoor-to-indoor and indoor-to-outdoor, …",
"title": ""
},
{
"docid": "pos:1840459_1",
"text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.",
"title": ""
}
] | [
{
"docid": "neg:1840459_0",
"text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840459_1",
"text": "Drivable free space information is vital for autonomous vehicles that have to plan evasive maneu vers in realtime. In this paper, we present a new efficient met hod for environmental free space detection with laser scann er based on 2D occupancy grid maps (OGM) to be used for Advance d Driving Assistance Systems (ADAS) and Collision Avo idance Systems (CAS). Firstly, we introduce an enhanced in verse sensor model tailored for high-resolution laser scanners f or building OGM. It compensates the unreflected beams and deals with the ray casting to grid cells accuracy and computationa l effort problems. Secondly, we introduce the ‘vehicle on a circle for grid maps’ map alignment algorithm that allows building more accurate local maps by avoiding the computationally expensive inaccurate operations of image sub-pixel shifting a nd rotation. The resulted grid map is more convenient for ADAS f eatures than existing methods, as it allows using less memo ry sizes, and hence, results into a better real-time performance. Thirdly, we present an algorithm to detect what we call the ‘in-sight edges’. These edges guarantee modeling the free space area with a single polygon of a fixed number of vertices regardless th e driving situation and map complexity. The results from real world experiments show the effectiveness of our approach. Keywords— Occupancy Grid Map; Static Free Space Detection; Advanced Driving Assistance Systems; las er canner; autonomous driving",
"title": ""
},
{
"docid": "neg:1840459_2",
"text": "RUNX1 is a member of the core-binding factor family of transcription factors and is indispensable for the establishment of definitive hematopoiesis in vertebrates. RUNX1 is one of the most frequently mutated genes in a variety of hematological malignancies. Germ line mutations in RUNX1 cause familial platelet disorder with associated myeloid malignancies. Somatic mutations and chromosomal rearrangements involving RUNX1 are frequently observed in myelodysplastic syndrome and leukemias of myeloid and lymphoid lineages, that is, acute myeloid leukemia, acute lymphoblastic leukemia, and chronic myelomonocytic leukemia. More recent studies suggest that the wild-type RUNX1 is required for growth and survival of certain types of leukemia cells. The purpose of this review is to discuss the current status of our understanding about the role of RUNX1 in hematological malignancies.",
"title": ""
},
{
"docid": "neg:1840459_3",
"text": "The increased accessibility of digitally sourced data and advance technology to analyse it drives many industries to digital change. Many global businesses are talking about the potential of big data and they believe that analysing big data sets can help businesses derive competitive insight and shape organisations’ marketing strategy decisions. Potential impact of digital technology varies widely by industry. Sectors such as financial services, insurances and mobile telecommunications which are offering virtual rather than physical products are more likely highly susceptible to digital transformation. Howeverthe interaction between digital technology and organisations is complex and there are many barriers for to effective digital change which are presented by big data. Changes brought by technology challenges both researchers and practitioners. Various global business and digital tends have highlights the emergent need for collaboration between academia and market practitioners. There are “theories-in – use” which are academically rigorous but still there is gap between implementation of theory in practice. In this paper we identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. INTRODUCTION Advances in digital technology has made a significant impact on marketing theory and practice. Technology expands the opportunity to capture better quality customer data, increase focus on customer relationship, rise of customer insight and Customer Relationship Management (CRM). Availability of big data made traditional marketing tools to work more powerful and innovative way. In current digital age of marketing some predictions of effects of the digital changes have come to function but still there is no definite answer to what works and what doesn’t in terms of implementing the changes in an organisation context. The choice of this specific topic is motivated by the need for a better understanding for impact of digital on marketing fild.This paper will discusses the potential positive impact of the big data on digital marketing. It also present the evidence of positive views in academia and highlight the gap between academia and practices. The main focus is on understanding the gap and providing recommendation for fillingit in. The aim of this paper is to identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results presented here show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. In our discussion we shall identify these industries and present evaluations of which industry sectors would need to be looking at understanding of impact that big data may have on their practices and businesses. Digital Marketing and Big data In early 90’s when views about digital changes has started Parsons at el (1998) believed that to achieve success in digital marketing consumer marketers should create a new model with five essential elements in new media environment. Figure below shows five success factors and issues that marketers should address around it. Figure 1. 
Digital marketing framework and levers (Parsons et al., 1998). Today, in the digital age of marketing, some predictions of the effects of these changes have come to function, but still there are no definite answers on what works and what doesn’t in terms of implementing it in an organisation context (S. Dibb, 2012). There are different explanations, arguments and views about the impact of digital on marketing strategy in the literature. At first, it is important to define what is meant by digital marketing, what are the challenges brought by it and then understand how it is adopted. Simply, Digital Marketing (2012) can be defined as “a sub branch of traditional Marketing using modern digital channels for the placement of products such as downloadable music, and primarily for communicating with stakeholders e.g. customers and investors about brand, products and business progress”. According to Smith (2007), digital marketing refers to “The use of digital technologies to create an integrated, targeted and measurable communication which helps to acquire and retain customers while building deeper relationships with them”. There are a number of accepted theoretical frameworks; however, as Parsons et al (1998) suggested, the potentialities offered by digital marketing need to be considered carefully by senior managers in terms of where and how to build them in each organisation. The most recent developments in this area have been triggered by the growing amount of digital data now known as Big Data. Tech American Foundation (2004) defines Big Data as a “term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture, storage, distribution, management and analysis of information”. D. Krajicek (2013) argues that the big challenge of Big Data is the ability to focus on what is meaningful, not on what is possible; with so much information at their fingertips, marketers and their research partners can and often do fall into the “more is better” fallacy. Knowing something and knowing it quickly is not enough. Therefore, to have valuable Big Data, it needs to be sorted by professional people who have the skills to understand the dynamics of the market and can identify what is relevant and meaningful (G. Day, 2011). Data should be used to achieve competitive advantage by creating effective relationships with the target segments. According to K. Kendall (2014), with the right capabilities, you can take a whole range of new data sources such as web browsing, social data and geotracking data and develop a much more complete profile of your customers, and then with this information you can segment better. Successful Big Data initiatives should start with a specific and clearly defined business requirement; then leaders of these initiatives need to assess the technical requirements and identify gaps in their capabilities and then plan the investment to close those gaps (Big Data Analytics, 2014). The impact and current challenges: Bileviciene (2012) suggests that well conducted market research is the basis for successful marketing and a well conducted study is the basis of successful market segmentation. Generally, marketing management is broken down into a series of steps, which include market research, segmentation of markets and positioning the company’s offering in such a way as to appeal to the targeted segments. 
(OU Business school, 2007) Market segmentation refers to the process of defining and subdividing a large homogenous market into clearly identifiable segments having similar needs, wants, or demand characteristics. Its objective is to design a marketing mix that precisely matches the expectations of customers in the targeted segment (Business dictation, 2013). The goal of segmentation is to break down the target market into different consumer groups. According to Kotler and Armstrong (2011), customers were traditionally classified based on four types of segmentation variables: geographic, demographic, psychographic and behavioural. There are many focuses, beliefs and arguments in the field of market segmentation. Many researchers believe that the traditional variables of demographic and geographic segments are out-dated and the theory regarding segmentation has become too narrow (Quinn and Dibb, 2010). According to Lin (2002), these variables should be a part of a new, expanded view of the market segmentation theory that focuses more on customer’s personalities and values. Dibb and Simkin (2009) argue that priorities of market segmentation research aim at exploring the applicability of new segmentation bases across different products and contexts, developing more flexible data analysis techniques, and creating new research designs and data collection approaches; however, practical questions about implementation and integration have received less attention. According to S. Dibb (2012), from an academic perspective segmentation still has a strategic and tactical role, as shown in the figure below. But in practice, as Dibb argues, “some things have not changed”: segmentation’s strategic role still matters; implementation is as much of a pain as always; even the smartest segments need embedding. Figure 2: role of segmentation (S. Dibb, 2012). Dilemmas with the implementation of digital change arise for various reasons. Some academics believed that greater access to data would reduce the need for more traditional segmentation, but research done in the field shows that traditional segmentation works equal to CRM (W. Boulding et al 2005). Even though the marketing literature offers insights for improving the effectiveness of digital changes in the marketing field, there is a limitation on how an organisation adapts its customer information processes once the technology is adjusted into the organisation. (J. Peltier et al 2012) suggest that there is an urgent need for data management studies that capture insights from other disciplines including organisational behaviour, change management and technology implementation. Reibstein et al (2009) also highlight the emergent need for collaboration between academia and market practitioners. They point out that there is a “digital skill gap” within the marketing field. Authors argue that there are “theories-in-use” which are academically rigorous but still there is a gap between theory and implementation in practice. Changes brought by technology and availability of di",
"title": ""
},
{
"docid": "neg:1840459_4",
"text": "Many experimental studies indicate that people are motivated by reciprocity. Rabin [Amer. Rev. 83 (1993) 1281] develops techniques for incorporating such concerns into game theo economics. His theory is developed for normal form games, and he abstracts from information the sequential structure of a strategic situation. We develop a theory of reciprocity for ext games in which the sequential structure of a strategic situation is made explicit, and propose solution concept—sequential reciprocity equilibrium—for which we prove an equilibrium exis result. The model is applied in several examples, and it is shown that it captures very well the in meaning of reciprocity as well as certain qualitative features of experimental evidence. 2003 Elsevier Inc. All rights reserved. JEL classification: A13; C70; D63",
"title": ""
},
{
"docid": "neg:1840459_5",
"text": "This paper presents a mutual capacitive touch screen panel (TSP) readout IC (ROIC) with a differential continuousmode parallel operation architecture (DCPA). The proposed architecture achieves a high product of signal-to-noise ratio (SNR) and frame rate, which is a requirement of ROIC for large-sized TSP. DCPA is accomplished by using the proposed differential sensing method with a parallel architecture in a continuousmode. This architecture is implemented using a continuous-type transmitter for parallel signaling and a differential-architecture receiver. A continuous-type differential charge amplifier removes the common-mode noise component, and reduces the self-noise by the band-pass filtering effect of the continuous-mode charge amplifier. In addition, the differential parallel architecture cancels the timing skew problem caused by the continuous-mode parallel operation and effectively enhances the power spectrum density of the signal. The proposed ROIC was fabricated using a 0.18-μm CMOS process and occupied an active area of 1.25 mm2. The proposed system achieved a 72 dB SNR and 240 Hz frame rate with a 32 channel TX by 10 channel RX mutual capacitive TSP. Moreover, the proposed differential-parallel architecture demonstrated higher immunity to lamp noise and display noise. The proposed system consumed 42.5 mW with a 3.3-V supply.",
"title": ""
},
{
"docid": "neg:1840459_6",
"text": "The aim of this paper is to present a review of recently used current control techniques for three-phase voltagesource pulsewidth modulated converters. Various techniques, different in concept, have been described in two main groups: linear and nonlinear. The first includes proportional integral stationary and synchronous) and state feedback controllers, and predictive techniques with constant switching frequency. The second comprises bang-bang (hysteresis, delta modulation) controllers and predictive controllers with on-line optimization. New trends in the current control—neural networks and fuzzy-logicbased controllers—are discussed, as well. Selected oscillograms accompany the presentation in order to illustrate properties of the described controller groups.",
"title": ""
},
{
"docid": "neg:1840459_7",
"text": "The aim of the present research is to study the rel ationship between “internet addiction” and “meta-co gnitive skills” with “academic achievement” in students of Islamic Azad University, Hamedan branch. This is de criptive – correlational method is used. To measure meta-cogni tive skills and internet addiction of students Well s questionnaire and Young questionnaire are used resp ectively. The population of the study is students o f Islamic Azad University of Hamedan. Using proportional stra tified random sampling the sample size was 375 stud ents. The results of the study showed that there is no signif icant relationship between two variables of “meta-c ognition” and “Internet addiction”(P >0.184).However, there is a significant relationship at 5% level between the tw o variables \"meta-cognition\" and \"academic achievement\" (P<0.00 2). Also, a significant inverse relationship was ob served between the average of two variables of \"Internet a ddiction\" and \"academic achievement\" at 5% level (P <0.031). There is a significant difference in terms of metacognition among the groups of different fields of s tudies. Furthermore, there is a significant difference in t erms of internet addiction scores among students be longing to different field of studies. In explaining the acade mic achievement variable variance of “meta-cognitio ” and “Internet addiction” using combined regression, it was observed that the above mentioned variables exp lain 16% of variable variance of academic achievement simultane ously.",
"title": ""
},
{
"docid": "neg:1840459_8",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "neg:1840459_9",
"text": "This study developed an integrated model to explore the antecedents and consequences of online word-of-mouth in the context of music-related communication. Based on survey data from college students, online word-of-mouth was measured with two components: online opinion leadership and online opinion seeking. The results identified innovativeness, Internet usage, and Internet social connection as significant predictors of online word-of-mouth, and online forwarding and online chatting as behavioral consequences of online word-of-mouth. Contrary to the original hypothesis, music involvement was found not to be significantly related to online word-of-mouth. Theoretical implications of the findings and future research directions are discussed.",
"title": ""
},
{
"docid": "neg:1840459_10",
"text": "A non-linear poroelastic finite element model of the lumbar spine was developed to investigate spinal response during daily dynamic physiological activities. Swelling was simulated by imposing a boundary pore pressure of 0.25 MPa at all external surfaces. Partial saturation of the disc was introduced to circumvent the negative pressures otherwise computed upon unloading. The loading conditions represented a pre-conditioning full day followed by another day of loading: 8h rest under a constant compressive load of 350 N, followed by 16 h loading phase under constant or cyclic compressive load varying in between 1000 and 1600 N. In addition, the effect of one or two short resting periods in the latter loading phase was studied. The model yielded fairly good agreement with in-vivo and in-vitro measurements. Taking the partial saturation of the disc into account, no negative pore pressures were generated during unloading and recovery phase. Recovery phase was faster than the loading period with equilibrium reached in only approximately 3h. With time and during the day, the axial displacement, fluid loss, axial stress and disc radial strain increased whereas the pore pressure and disc collagen fiber strains decreased. The fluid pressurization and collagen fiber stiffening were noticeable early in the morning, which gave way to greater compression stresses and radial strains in the annulus bulk as time went by. The rest periods dampened foregoing differences between the early morning and late in the afternoon periods. The forgoing diurnal variations have profound effects on lumbar spine biomechanics and risk of injury.",
"title": ""
},
{
"docid": "neg:1840459_11",
"text": "A new technology evaluation of fingerprint verification algorithms has been organized following the approach of the previous FVC2000 and FVC2002 evaluations, with the aim of tracking the quickly evolving state-ofthe-art of fingerprint recognition systems. Three sensors have been used for data collection, including a solid state sweeping sensor, and two optical sensors of different characteristics. The competition included a new category dedicated to “ light” systems, characterized by limited computational and storage resources. This paper summarizes the main activities of the FVC2004 organization and provides a first overview of the evaluation. Results will be further elaborated and officially presented at the International Conference on Biometric Authentication (Hong Kong) on July 2004.",
"title": ""
},
{
"docid": "neg:1840459_12",
"text": "The cultivation of pepper has great importance in all regions of Brazil, due to its characteristics of profi tability, especially when the producer and processing industry add value to the product, or its social importance because it employs large numbers of skilled labor. Peppers require monthly temperatures ranging between 21 and 30 °C, with an average of 18 °C. At low temperatures, there is a decrease in germination, wilting of young parts, and slow growth. Plants require adequate level of nitrogen, favoring plants and fruit growth. Most the cultivars require large spacing for adequate growth due to the canopy of the plants. Proper insect, disease, and weed control prolong the harvest of fruits for longer periods, reducing losses. The crop cycle and harvest period are directly affected by weather conditions, incidence of pests and diseases, and cultural practices including adequate fertilization, irrigation, and adoption of phytosanitary control measures. In general for most cultivars, the fi rst harvest starts 90 days after sowing, which can be prolonged for a couple of months depending on the plant physiological condition.",
"title": ""
},
{
"docid": "neg:1840459_13",
"text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access James B.D. Joshi,",
"title": ""
},
{
"docid": "neg:1840459_14",
"text": "OBJECTIVE\nThe Psychosocial Assessment Tool (PAT) was developed to screen for psychosocial risk in families of a child diagnosed with cancer. The current study is the first describing the cross-cultural adaptation, reliability, validity, and usability of the PAT in an European country (Dutch translation).\n\n\nMETHODS\nA total of 117 families (response rate 59%) of newly diagnosed children with cancer completed the PAT2.0 and validation measures.\n\n\nRESULTS\nAcceptable reliability was obtained for the PAT total score (α = .72) and majority of subscales (0.50-0.82). Two subscales showed inadequate internal consistency (Social Support α = .19; Family Beliefs α = .20). Validity and usability were adequate. Of the families, 66% scored low (Universal), 29% medium (Targeted), and 5% high (Clinical) risk.\n\n\nCONCLUSIONS\nThis study confirms the cross-cultural applicability, reliability, and validity of the PAT total score. Reliability left room for improvement on subscale level. Future research should indicate whether the PAT can be used to provide cost-effective care.",
"title": ""
},
{
"docid": "neg:1840459_15",
"text": "Ascariasis, a worldwide parasitic disease, is regarded by some authorities as the most common parasitic infection in humans. The causative organism is Ascaris lumbricoides, which normally lives in the lumen of the small intestine. From the intestine, the worm can invade the bile duct or pancreatic duct, but invasion into the gallbladder is quite rare because of the anatomical features of the cystic duct, which is narrow and tortuous. Once it enters the gallbladder, it is exceedingly rare for the worm to migrate back to the intestine. We report a case of gallbladder ascariasis with worm migration back into the intestine, in view of its rare presentation.",
"title": ""
},
{
"docid": "neg:1840459_16",
"text": "In this work, we tackle the problem of crowd counting in images. We present a Convolutional Neural Network (CNN) based density estimation approach to solve this problem. Predicting a high resolution density map in one go is a challenging task. Hence, we present a two branch CNN architecture for generating high resolution density maps, where the first branch generates a low resolution density map, and the second branch incorporates the low resolution prediction and feature maps from the first branch to generate a high resolution density map. We also propose a multi-stage extension of our approach where each stage in the pipeline utilizes the predictions from all the previous stages. Empirical comparison with the previous state-of-the-art crowd counting methods shows that our method achieves the lowest mean absolute error on three challenging crowd counting benchmarks: Shanghaitech, WorldExpo’10, and UCF datasets.",
"title": ""
},
{
"docid": "neg:1840459_17",
"text": "A 60 GHz frequency band planar diplexer based on Substrate Integrated Waveguide (SIW) technology is presented in this research. The 5th order millimeter wave SIW filter is investigated first, and then the 60 GHz SIW diplexer is designed and been simulated. SIW-microstrip transitions are also included in the final design. The relative bandwidths of up and down channels are 1.67% and 1.6% at 59.8 GHz and 62.2 GHz respectively. Simulation shows good channel isolation, small return losses and moderate insertion losses in pass bands. The diplexer can be easily integrated in millimeter wave integrated circuits.",
"title": ""
}
] |
1840460 | Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning | [
{
"docid": "pos:1840460_0",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
}
] | [
{
"docid": "neg:1840460_0",
"text": "We consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. Past approaches project labels across bitext and use them as features or gold labels for training. We propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language boundaries. We encode expectations as constraints and train a discriminative CRF model using Generalized Expectation Criteria (Mann and McCallum, 2010). Evaluated on standard Chinese-English and German-English NER datasets, our method demonstrates F1 scores of 64% and 60% when no labeled data is used. Attaining the same accuracy with supervised CRFs requires 12k and 1.5k labeled sentences. Furthermore, when combined with labeled examples, our method yields significant improvements over state-of-the-art supervised methods, achieving best reported numbers to date on Chinese OntoNotes and German CoNLL-03 datasets.",
"title": ""
},
{
"docid": "neg:1840460_1",
"text": "Nowadays, FMCW (Frequency Modulated Continuous Wave) radar is widely adapted due to the use of solid state microwave amplifier to generate signal source. The FMCW radar can be implemented and analyzed at low cost and less complexity by using Software Defined Radio (SDR). In this paper, SDR based FMCW radar for target detection and air traffic control radar application is implemented in real time. The FMCW radar model is implemented using open source software and hardware. GNU Radio is utilized for software part of the radar and USRP (Universal Software Radio Peripheral) N210 for hardware part. Log-periodic antenna operating at 1GHZ frequency is used for transmission and reception of radar signals. From the beat signal obtained at receiver end and range resolution of signal, target is detected. Further low pass filtering followed by Fast Fourier Transform (FFT) is performed to reduce computational complexity.",
"title": ""
},
{
"docid": "neg:1840460_2",
"text": "Argument extraction is the task of identifying arguments, along with their components in text. Arguments can be usually decomposed into a claim and one or more premises justifying it. The proposed approach tries to identify segments that represent argument elements (claims and premises) on social Web texts (mainly news and blogs) in the Greek language, for a small set of thematic domains, including articles on politics, economics, culture, various social issues, and sports. The proposed approach exploits distributed representations of words, extracted from a large non-annotated corpus. Among the novel aspects of this work is the thematic domain itself which relates to social Web, in contrast to traditional research in the area, which concentrates mainly on law documents and scientific publications. The huge increase of social web communities, along with their user tendency to debate, makes the identification of arguments in these texts a necessity. In addition, a new manually annotated corpus has been constructed that can be used freely for research purposes. Evaluation results are quite promising, suggesting that distributed representations can contribute positively to the task of argument extraction.",
"title": ""
},
{
"docid": "neg:1840460_3",
"text": "This paper presents an Iterative Linear Quadratic Regulator (ILQR) me thod for locally-optimal feedback control of nonlinear dynamical systems. The method is applied to a musculo-s ke etal arm model with 10 state dimensions and 6 controls, and is used to compute energy-optimal reach ing movements. Numerical comparisons with three existing methods demonstrate that the new method converge s substantially faster and finds slightly better solutions.",
"title": ""
},
{
"docid": "neg:1840460_4",
"text": "General unsupervised learning is a long-standing conceptual problem in machine learning. Supervised learning is successful because it can be solved by the minimization of the training error cost function. Unsupervised learning is not as successful, because the unsupervised objective may be unrelated to the supervised task of interest. For an example, density modelling and reconstruction have often been used for unsupervised learning, but they did not produced the sought-after performance gains, because they have no knowledge of the sought-after supervised tasks. In this paper, we present an unsupervised cost function which we name the Output Distribution Matching (ODM) cost, which measures a divergence between the distribution of predictions and distributions of labels. The ODM cost is appealing because it is consistent with the supervised cost in the following sense: a perfect supervised classifier is also perfect according to the ODM cost. Therefore, by aggressively optimizing the ODM cost, we are almost guaranteed to improve our supervised performance whenever the space of possible predictions is exponentially large. We demonstrate that the ODM cost works well on number of small and semiartificial datasets using no (or almost no) labelled training cases. Finally, we show that the ODM cost can be used for one-shot domain adaptation, which allows the model to classify inputs that differ from the input distribution in significant ways without the need for prior exposure to the new domain.",
"title": ""
},
{
"docid": "neg:1840460_5",
"text": "This paper presents the development of automatic vehicle plate detection system using image processing technique. The famous name for this system is Automatic Number Plate Recognition (ANPR). Automatic vehicle plate detection system is commonly used in field of safety and security systems especially in car parking area. Beside the safety aspect, this system is applied to monitor road traffic such as the speed of vehicle and identification of the vehicle's owner. This system is designed to assist the authorities in identifying the stolen vehicle not only for car but motorcycle as well. In this system, the Optical Character Recognition (OCR) technique was the prominent technique employed by researchers to analyse image of vehicle plate. The limitation of this technique was the incapability of the technique to convert text or data accurately. Besides, the characters, the background and the size of the vehicle plate are varied from one country to other country. Hence, this project proposes a combination of image processing technique and OCR to obtain the accurate vehicle plate recognition for vehicle in Malaysia. The outcome of this study is the system capable to detect characters and numbers of vehicle plate in different backgrounds (black and white) accurately. This study also involves the development of Graphical User Interface (GUI) to ease user in recognizing the characters and numbers in the vehicle or license plates.",
"title": ""
},
{
"docid": "neg:1840460_6",
"text": "In this paper, we propose a novel end-to-end neural architecture for ranking candidate answers, that adapts a hierarchical recurrent neural network and a latent topic clustering module. With our proposed model, a text is encoded to a vector representation from an wordlevel to a chunk-level to effectively capture the entire meaning. In particular, by adapting the hierarchical structure, our model shows very small performance degradations in longer text comprehension while other state-of-the-art recurrent neural network models suffer from it. Additionally, the latent topic clustering module extracts semantic information from target samples. This clustering module is useful for any text related tasks by allowing each data sample to find its nearest topic cluster, thus helping the neural network model analyze the entire data. We evaluate our models on the Ubuntu Dialogue Corpus and consumer electronic domain question answering dataset, which is related to Samsung products. The proposed model shows state-of-the-art results for ranking question-answer pairs.",
"title": ""
},
{
"docid": "neg:1840460_7",
"text": "Progress in both speech and language processing has spurred efforts to support applications that rely on spoken rather than written language input. A key challenge in moving from text-based documents to such spoken documents is that spoken language lacks explicit punctuation and formatting, which can be crucial for good performance. This article describes different levels of speech segmentation, approaches to automatically recovering segment boundary locations, and experimental results demonstrating impact on several language processing tasks. The results also show a need for optimizing segmentation for the end task rather than independently.",
"title": ""
},
{
"docid": "neg:1840460_8",
"text": "Users derive many benefits by storing personal data in cloud computing services; however the drawback of storing data in these services is that the user cannot access his/her own data when an internet connection is not available. To solve this problem in an efficient and elegant way, we propose the cloud-dew architecture. Cloud-dew architecture is an extension of the client-server architecture. In the extension, servers are further classified into cloud servers and dew servers. The dew servers are web servers that reside on user’s local computers and have a pluggable structure so that scripts and databases of websites can be installed easily. The cloud-dew architecture not only makes the personal data stored in the cloud continuously accessible by the user, but also enables a new application: web-surfing without an internet connection. An experimental system is presented to demonstrate the ideas of the cloud-dew architecture.",
"title": ""
},
{
"docid": "neg:1840460_9",
"text": "Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference. We calculate scores for sentences in document clusters based on their latent characteristics using a hierarchical topic model. Then, using these scores, we train a regression model based on the lexical and structural characteristics of the sentences, and use the model to score sentences of new documents to form a summary. Our system advances current state-of-the-art improving ROUGE scores by ∼7%. Generated summaries are less redundant and more coherent based upon manual quality evaluations.",
"title": ""
},
{
"docid": "neg:1840460_10",
"text": "Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior.\n The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.",
"title": ""
},
{
"docid": "neg:1840460_11",
"text": "Existing automated essay scoring (AES) models rely on rated essays for the target prompt as training data. Despite their successes in prompt-dependent AES, how to effectively predict essay ratings under a prompt-independent setting remains a challenge, where the rated essays for the target prompt are not available. To close this gap, a two-stage deep neural network (TDNN) is proposed. In particular, in the first stage, using the rated essays for nontarget prompts as the training data, a shallow model is learned to select essays with an extreme quality for the target prompt, serving as pseudo training data; in the second stage, an end-to-end hybrid deep model is proposed to learn a prompt-dependent rating model consuming the pseudo training data from the first step. Evaluation of the proposed TDNN on the standard ASAP dataset demonstrates a promising improvement for the prompt-independent AES task.",
"title": ""
},
{
"docid": "neg:1840460_12",
"text": "Ensembles of learning machines are promising for software effort estimation (SEE), but need to be tailored for this task to have their potential exploited. A key issue when creating ensembles is to produce diverse and accurate base models. Depending on how differently different performance measures behave for SEE, they could be used as a natural way of creating SEE ensembles. We propose to view SEE model creation as a multiobjective learning problem. A multiobjective evolutionary algorithm (MOEA) is used to better understand the tradeoff among different performance measures by creating SEE models through the simultaneous optimisation of these measures. We show that the performance measures behave very differently, presenting sometimes even opposite trends. They are then used as a source of diversity for creating SEE ensembles. A good tradeoff among different measures can be obtained by using an ensemble of MOEA solutions. This ensemble performs similarly or better than a model that does not consider these measures explicitly. Besides, MOEA is also flexible, allowing emphasis of a particular measure if desired. In conclusion, MOEA can be used to better understand the relationship among performance measures and has shown to be very effective in creating SEE models.",
"title": ""
},
{
"docid": "neg:1840460_13",
"text": "This paper presents the issue of a nonharmonic multitone generation with the use of singing bowls and the digital signal processors. The authors show the possibility of therapeutic applications of such multitone signals. Some known methods of the digital generation of the tone signal with the additional modulation are evaluated. Two projects of the very precise multitone generators are presented. In described generators, the digital signal processors synthesize the signal, while the additional microcontrollers realize the operator's interface. As a final result, the sound of the original singing bowls is confronted with the sound synthesized by one of the generators.",
"title": ""
},
{
"docid": "neg:1840460_14",
"text": "Spaceborne synthetic aperture radar systems are severely constrained to a narrow swath by ambiguity limitations. Here a vertically scanned-beam synthetic aperture system (SCANSAR) is proposed as a solution to this problem. The potential length of synthetic aperture must be shared between beam positions, so the along-track resolution is poorer; a direct tradeoff exists between resolution and swath width. The length of the real aperture is independently traded against the number of scanning positions. Design curves and equations are presented for spaceborne SCANSARs for altitudes between 400 and 1400 km and inner angles of incidence between 20° and 40°. When the real antenna is approximately square, it may also be used for a microwave radiometer. The combined radiometer and synthetic-aperture (RADISAR) should be useful for those applications where the poorer resolution of the radiometer is useful for some purposes, but the finer resolution of the radar is needed for others.",
"title": ""
},
{
"docid": "neg:1840460_15",
"text": "A method for requirements analysis is proposed that accounts for individual and personal goals, and the effect of time and context on personal requirements. First a framework to analyse the issues inherent in requirements that change over time and location is proposed. The implications of the framework on system architecture are considered as three implementation pathways: functional specifications, development of customisable features and automatic adaptation by the system. These pathways imply the need to analyse system architecture requirements. A scenario-based analysis method is described for specifying requirements goals and their potential change. The method addresses goal setting for measurement and monitoring, and conflict resolution when requirements at different layers (group, individual) and from different sources (personal, advice from an external authority) conflict. The method links requirements analysis to design by modelling alternative solution pathways. Different implementation pathways have cost–benefit implications for stakeholders, so cost–benefit analysis techniques are proposed to assess trade-offs between goals and implementation strategies. The use of the framework is illustrated with two case studies in assistive technology domains: e-mail and a personalised navigation system. The first case study illustrates personal requirements to help cognitively disabled users communicate via e-mail, while the second addresses personal and mobile requirements to help disabled users make journeys on their own, assisted by a mobile PDA guide. In both case studies the experience from requirements analysis to implementation, requirements monitoring, and requirements evolution is reported.",
"title": ""
},
{
"docid": "neg:1840460_16",
"text": "Analyzing the security of Wearable Internet-of-Things (WIoT) devices is considered a complex task due to their heterogeneous nature. In addition, there is currently no mechanism that performs security testing for WIoT devices in different contexts. In this article, we propose an innovative security testbed framework targeted at wearable devices, where a set of security tests are conducted, and a dynamic analysis is performed by realistically simulating environmental conditions in which WIoT devices operate. The architectural design of the proposed testbed and a proof-of-concept, demonstrating a preliminary analysis and the detection of context-based attacks executed by smartwatch devices, are presented.",
"title": ""
},
{
"docid": "neg:1840460_17",
"text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.",
"title": ""
},
{
"docid": "neg:1840460_18",
"text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation and we show how to use it to increase the accuracy and reduce over fitting on a target network. Smart augmentation works, by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that networks loss. This allows us to learn augmentations that minimize the error of that network. Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.",
"title": ""
}
] |
1840461 | One-step and Two-step Classification for Abusive Language Detection on Twitter | [
{
"docid": "pos:1840461_0",
"text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.",
"title": ""
}
] | [
{
"docid": "neg:1840461_0",
"text": "This decade sees a growing number of applications of Unmanned Aerial Vehicles (UAVs) or drones. UAVs are now being experimented for commercial applications in public areas as well as used in private environments such as in farming. As such, the development of efficient communication protocols for UAVs is of much interest. This paper compares and contrasts recent communication protocols of UAVs with that of Vehicular Ad Hoc Networks (VANETs) using Wireless Access in Vehicular Environments (WAVE) protocol stack as the reference model. The paper also identifies the importance of developing light-weight communication protocols for certain applications of UAVs as they can be both of low processing power and limited battery energy.",
"title": ""
},
{
"docid": "neg:1840461_1",
"text": "Does the brain of a bilingual process language differently from that of a monolingual? We compared how bilinguals and monolinguals recruit classic language brain areas in response to a language task and asked whether there is a neural signature of bilingualism. Highly proficient and early-exposed adult Spanish-English bilinguals and English monolinguals participated. During functional magnetic resonance imaging (fMRI), participants completed a syntactic sentence judgment task [Caplan, D., Alpert, N., & Waters, G. Effects of syntactic structure and propositional number on patterns of regional cerebral blood flow. Journal of Cognitive Neuroscience, 10, 541552, 1998]. The sentences exploited differences between Spanish and English linguistic properties, allowing us to explore similarities and differences in behavioral and neural responses between bilinguals and monolinguals, and between a bilingual's two languages. If bilinguals' neural processing differs across their two languages, then differential behavioral and neural patterns should be observed in Spanish and English. Results show that behaviorally, in English, bilinguals and monolinguals had the same speed and accuracy, yet, as predicted from the Spanish-English structural differences, bilinguals had a different pattern of performance in Spanish. fMRI analyses revealed that both monolinguals (in one language) and bilinguals (in each language) showed predicted increases in activation in classic language areas (e.g., left inferior frontal cortex, LIFC), with any neural differences between the bilingual's two languages being principled and predictable based on the morphosyntactic differences between Spanish and English. However, an important difference was that bilinguals had a significantly greater increase in the blood oxygenation level-dependent signal in the LIFC (BA 45) when processing English than the English monolinguals. The results provide insight into the decades-old question about the degree of separation of bilinguals' dual-language representation. The differential activation for bilinguals and monolinguals opens the question as to whether there may possibly be a neural signature of bilingualism. Differential activation may further provide a fascinating window into the language processing potential not recruited in monolingual brains and reveal the biological extent of the neural architecture underlying all human language.",
"title": ""
},
{
"docid": "neg:1840461_2",
"text": "Every year, novel NVIDIA GPU designs are introduced. This rapid architectural and technological progression, coupled with a reluctance by manufacturers to disclose low-level details, makes it difficult for even the most proficient GPU software designers to remain up-to-date with the technological advances at a microarchitectural level. To address this dearth of public, microarchitectural-level information on the novel NVIDIA GPUs, independent researchers have resorted to microbenchmarks-based dissection and discovery. This has led to a prolific line of publications that shed light on instruction encoding, and memory hierarchy's geometry and features at each level. Namely, research that describes the performance and behavior of the Kepler, Maxwell and Pascal architectures. In this technical report, we continue this line of research by presenting the microarchitectural details of the NVIDIA Volta architecture, discovered through microbenchmarks and instruction set disassembly. Additionally, we compare quantitatively our Volta findings against its predecessors, Kepler, Maxwell and Pascal.",
"title": ""
},
{
"docid": "neg:1840461_3",
"text": "A succinct overview of some of the major research approaches to the study of leadership is provided as a foundation for the introduction of a multicomponent model of leadership that draws on those findings, complexity theory, and the concept of emergence. The major aspects of the model include: the personal characteristics and capacities, thoughts, feelings, behaviors, and human working relationships of leaders, followers, and other stake holders, the organization’s systems, including structures, processes, contents, and internal situations, the organization’s performance and outcomes, and the external environment(s), ecological niches, and external situations in which an enterprise functions. The relationship between this model and other approaches in the literature as well as directions in research on leadership and implications for consulting practice are discussed.",
"title": ""
},
{
"docid": "neg:1840461_4",
"text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline; while the other is excited by a pair of meandering strips with a 180$^{\\circ}$ phase differences. Both linearly polarized modes are designed to operate at 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is less than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed from the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.",
"title": ""
},
{
"docid": "neg:1840461_5",
"text": "OBJECTIVE\nThis study aimed to compare mental health, quality of life, empathy, and burnout in medical students from a medical institution in the USA and another one in Brazil.\n\n\nMETHODS\nThis cross-cultural study included students enrolled in the first and second years of their undergraduate medical training. We evaluated depression, anxiety, and stress (DASS 21), empathy, openness to spirituality, and wellness (ESWIM), burnout (Oldenburg), and quality of life (WHOQOL-Bref) and compared them between schools.\n\n\nRESULTS\nA total of 138 Brazilian and 73 US medical students were included. The comparison between all US medical students and all Brazilian medical students revealed that Brazilians reported more depression and stress and US students reported greater wellness, less exhaustion, and greater environmental quality of life. In order to address a possible response bias favoring respondents with better mental health, we also compared all US medical students with the 50% of Brazilian medical students who reported better mental health. In this comparison, we found Brazilian medical students had higher physical quality of life and US students again reported greater environmental quality of life. Cultural, social, infrastructural, and curricular differences were compared between institutions. Some noted differences were that students at the US institution were older and were exposed to smaller class sizes, earlier patient encounters, problem-based learning, and psychological support.\n\n\nCONCLUSION\nWe found important differences between Brazilian and US medical students, particularly in mental health and wellness. These findings could be explained by a complex interaction between several factors, highlighting the importance of considering cultural and school-level influences on well-being.",
"title": ""
},
{
"docid": "neg:1840461_6",
"text": "This paper describes a method of implementing two factor authentication using mobile phones. The proposed method guarantees that authenticating to services, such as online banking or ATM machines, is done in a very secure manner. The proposed system involves using a mobile phone as a software token for One Time Password generation. The generated One Time Password is valid for only a short user-defined period of time and is generated by factors that are unique to both, the user and the mobile device itself. Additionally, an SMS-based mechanism is implemented as both a backup mechanism for retrieving the password and as a possible mean of synchronization. The proposed method has been implemented and tested. Initial results show the success of the proposed method.",
"title": ""
},
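The passage above describes a software-token one-time-password scheme only in general terms. The sketch below is a hedged illustration, not the paper's exact algorithm: it assumes a per-device secret derived from user- and device-specific factors (the IMEI and PIN shown are hypothetical stand-ins) and a short, user-defined validity window, and it uses only Python's standard library.

```python
import hashlib
import hmac
import struct
import time

def device_secret(imei: str, pin: str) -> bytes:
    """Derive a per-device secret from factors unique to the user and handset.
    The exact factors (IMEI, PIN) are illustrative assumptions, not the paper's."""
    return hashlib.sha256((imei + ":" + pin).encode()).digest()

def one_time_password(secret: bytes, validity_s: int = 60, digits: int = 6, now=None) -> str:
    """Time-based OTP: the counter is the current time bucket, so the password
    is only valid for `validity_s` seconds (the user-defined period)."""
    counter = int((now if now is not None else time.time()) // validity_s)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (as in HOTP/TOTP) to obtain a short numeric code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

if __name__ == "__main__":
    secret = device_secret(imei="356938035643809", pin="4821")  # hypothetical values
    print("OTP valid for 60 s:", one_time_password(secret))
```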
{
"docid": "neg:1840461_7",
"text": "A number of important problems in theoretical computer science and machine learning can be interpreted as recovering a certain basis. These include certain tensor decompositions, Independent Component Analysis (ICA), spectral clustering and Gaussian mixture learning. Each of these problems reduces to an instance of our general model, which we call a “Basis Encoding Function” (BEF). We show that learning a basis within this model can then be provably and efficiently achieved using a first order iteration algorithm (gradient iteration). Our algorithm goes beyond tensor methods, providing a function-based generalization for a number of existing methods including the classical matrix power method, the tensor power iteration as well as cumulant-based FastICA. Our framework also unifies the unusual phenomenon observed in these domains that they can be solved using efficient non-convex optimization. Specifically, we describe a class of BEFs such that their local maxima on the unit sphere are in one-to-one correspondence with the basis elements. This description relies on a certain “hidden convexity” property of these functions. We provide a complete theoretical analysis of gradient iteration even when the BEF is perturbed. We show convergence and complexity bounds polynomial in dimension and other relevant parameters, such as perturbation size. Our perturbation results can be considered as a non-linear version of the classical Davis-Kahan theorem for perturbations of eigenvectors of symmetric matrices. In addition we show that our algorithm exhibits fast (superlinear) convergence and relate the speed of convergence to the properties of the BEF. Moreover, the gradient iteration algorithm can be easily and efficiently implemented in practice. Finally we apply our framework by providing the first provable algorithm for recovery in a general perturbed ICA model. ar X iv :1 41 1. 14 20 v3 [ cs .L G ] 3 N ov 2 01 5",
"title": ""
},
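The abstract above presents gradient iteration as a function-level generalization of the classical matrix power method and tensor power iteration. As a minimal, hedged illustration of the simplest special case (not the paper's general BEF algorithm), the sketch below recovers the leading eigenvector of a symmetric matrix by repeated multiplication and normalization.

```python
import numpy as np

def power_iteration(A, num_iters=200, tol=1e-10, seed=0):
    """Classical power method: the fixed point of x <- A x / ||A x|| is the
    eigenvector of the symmetric matrix A with the largest |eigenvalue|."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(num_iters):
        y = A @ x
        y /= np.linalg.norm(y)
        # Stop when the iterate has converged up to a sign flip.
        if np.linalg.norm(y - x) < tol or np.linalg.norm(y + x) < tol:
            x = y
            break
        x = y
    return x, float(x @ A @ x)  # eigenvector estimate and Rayleigh quotient

if __name__ == "__main__":
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    v, lam = power_iteration(A)
    print("leading eigenvalue ~", lam)
```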
{
"docid": "neg:1840461_8",
"text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable. We need a general model of design Articles",
"title": ""
},
{
"docid": "neg:1840461_9",
"text": "Automatic concept learning from large scale imbalanced data sets is a key issue in video semantic analysis and retrieval, which means the number of negative examples is far more than that of positive examples for each concept in the training data. The existing methods adopt generally under-sampling for the majority negative examples or over-sampling for the minority positive examples to balance the class distribution on training data. The main drawbacks of these methods are: (1) As a key factor that affects greatly the performance, in most existing methods, the degree of re-sampling needs to be pre-fixed, which is not generally the optimal choice; (2) Many useful negative samples may be discarded in under-sampling. In addition, some works only focus on the improvement of the computational speed, rather than the accuracy. To address the above issues, we propose a new approach and algorithm named AdaOUBoost (Adaptive Over-sampling and Under-sampling Boost). The novelty of AdaOUBoost mainly lies in: adaptively over-sample the minority positive examples and under-sample the majority negative examples to form different sub-classifiers. And combine these sub-classifiers according to their accuracy to create a strong classifier, which aims to use fully the whole training data and improve the performance of the class-imbalance learning classifier. In AdaOUBoost, first, our clustering-based under-sampling method is employed to divide the majority negative examples into some disjoint subsets. Then, for each subset of negative examples, we utilize the borderline-SMOTE (synthetic minority over-sampling technique) algorithm to over-sample the positive examples with different size, train each sub-classifier using each of them, and get the classifier by fusing these sub-classifiers with different weights. Finally, we combine these classifiers in each subset of negative examples to create a strong classifier. We compare the performance between AdaOUBoost and the state-of-the-art methods on TRECVID 2008 benchmark with all 20 concepts, and the results show the AdaOUBoost can achieve the superior performance in large scale imbalanced data sets.",
"title": ""
},
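AdaOUBoost, as described above, combines clustering-based under-sampling of the majority class, borderline-SMOTE over-sampling of the minority class, and accuracy-weighted fusion of sub-classifiers. The sketch below is a simplified stand-in rather than the authors' implementation: it partitions negatives with k-means, substitutes plain duplication for borderline-SMOTE to stay dependency-light, trains one decision tree per partition, and combines them with accuracy-derived weights.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def train_imbalance_ensemble(X, y, n_neg_clusters=5, seed=0):
    """y is {0, 1} with class 1 the minority. Returns (models, weights)."""
    rng = np.random.default_rng(seed)
    X_pos, X_neg = X[y == 1], X[y == 0]
    # Disjoint subsets of the majority class via clustering-based under-sampling.
    labels = KMeans(n_clusters=n_neg_clusters, n_init=10, random_state=seed).fit_predict(X_neg)
    models, weights = [], []
    for c in range(n_neg_clusters):
        X_neg_c = X_neg[labels == c]
        # Over-sample positives by duplication until the subset is roughly balanced
        # (the paper uses borderline-SMOTE; duplication keeps this sketch self-contained).
        reps = max(1, len(X_neg_c) // max(1, len(X_pos)))
        idx = rng.integers(0, len(X_pos), size=reps * len(X_pos))
        X_c = np.vstack([X_neg_c, X_pos[idx]])
        y_c = np.concatenate([np.zeros(len(X_neg_c)), np.ones(len(idx))])
        clf = DecisionTreeClassifier(max_depth=5, random_state=seed).fit(X_c, y_c)
        models.append(clf)
        weights.append(clf.score(X_c, y_c))  # accuracy of this sub-classifier
    w = np.asarray(weights)
    return models, w / w.sum()

def predict_proba(models, weights, X):
    """Accuracy-weighted average of the sub-classifiers' positive-class probabilities."""
    probs = np.stack([m.predict_proba(X)[:, 1] for m in models], axis=1)
    return probs @ weights
```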
{
"docid": "neg:1840461_10",
"text": "Canonical correlation analysis (CCA) is a well established technique for identifying linear relationships among two variable sets. Kernel CCA (KCCA) is the most notable nonlinear extension but it lacks interpretability and robustness against irrelevant features. The aim of this article is to introduce two nonlinear CCA extensions that rely on the recently proposed Hilbert-Schmidt independence criterion and the centered kernel target alignment. These extensions determine linear projections that provide maximally dependent projected data pairs. The paper demonstrates that the use of linear projections allows removing irrelevant features, whilst extracting combinations of strongly associated features. This is exemplified through a simulation and the analysis of recorded data that are available in the literature.",
"title": ""
},
{
"docid": "neg:1840461_11",
"text": "In this technological world one of the general method for user to save their data is cloud. Most of the cloud storage company provides some storage space as free to its users. Both individuals and corporate are storing their files in the cloud infrastructure so it becomes a problem for a forensics analyst to perform evidence acquisition and examination. One reason that makes evidence acquisition more difficult is user data always saved in remote computer on cloud. Various cloud companies available in the market serving storage as one of their services and everyone delivering different kinds of features and facilities in the storage technology. One area of difficulty is the acquisition of evidential data associated to a cybercrime stored in a different cloud company service. Due to lack of understanding about the location of evidence data regarding which place it is saved could also affect an analytical process and it take a long time to speak with all cloud service companies to find whether data is saved within their cloud. By analyzing two cloud service companies (IDrive and Mega cloud drive) this study elaborates the various steps involved in the activity of obtaining evidence on a user account through a browser and then via cloud software application on a Windows 7 machine. This paper will detail findings for both the Mega cloud drive and IDrive client software, to find the different evidence that IDrive and the mega cloud drive leaves behind on a user computer. By establishing the artifacts on a user machine will give an overall idea regarding kind of evidence residue in user computer for investigators. Key evidences discovered on this investigation comprises of RAM memory captures, registry files application logs, file time and date values and browser artifacts are acquired from these two cloud companies on a user windows machine.",
"title": ""
},
{
"docid": "neg:1840461_12",
"text": "How to represent a map of the environment is a key question of robotics. In this paper, we focus on suggesting a representation well-suited for online map building from vision-based data and online planning in 3D. We propose to combine a commonly-used representation in computer graphics and surface reconstruction, projective Truncated Signed Distance Field (TSDF), with a representation frequently used for collision checking and collision costs in planning, Euclidean Signed Distance Field (ESDF), and validate this combined approach in simulation. We argue that this type of map is better-suited for robotic applications than existing representations.",
"title": ""
},
{
"docid": "neg:1840461_13",
"text": "FLLL 2 Preface This is a printed collection of the contents of the lecture \" Genetic Algorithms: Theory and Applications \" which I gave first in the winter semester 1999/2000 at the Johannes Kepler University in Linz. The reader should be aware that this manuscript is subject to further reconsideration and improvement. Corrections, complaints, and suggestions are cordially welcome. The sources were manifold: Chapters 1 and 2 were written originally for these lecture notes. All examples were implemented from scratch. The third chapter is a distillation of the books of Goldberg [13] and Hoffmann [15] and a handwritten manuscript of the preceding lecture on genetic algorithms which was given by Andreas Stöckl in 1993 at the Johannes Kepler University. Chapters 4, 5, and 7 contain recent adaptations of previously published material from my own master thesis and a series of lectures which was given by Francisco Herrera and myself at the Second Summer School on Advanced Control at the Slovak Technical University, Bratislava, in summer 1997 [4]. Chapter 6 was written originally, however, strongly influenced by A. Geyer-Schulz's works and H. Hörner's paper on his C++ GP kernel [18]. I would like to thank all the students attending the first GA lecture in Winter 1999/2000, for remaining loyal throughout the whole term and for contributing much to these lecture notes with their vivid, interesting, and stimulating questions, objections, and discussions. Last but not least, I want to express my sincere gratitude to Sabine Lumpi and Susanne Saminger for support in organizational matters, and Pe-ter Bauer for proofreading .",
"title": ""
},
{
"docid": "neg:1840461_14",
"text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.",
"title": ""
},
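The study above fuses RSSI fingerprint likelihoods with a prior supplied by a motion model. The following is a hedged, grid-based sketch of that idea, not the paper's exact formulation: each candidate location stores a fingerprint of mean beacon RSSI values, and the posterior is the motion-model prior multiplied by a Gaussian likelihood of the observed RSSI vector.

```python
import numpy as np

def bayes_rssi_update(prior, fingerprints, observed_rssi, sigma=4.0):
    """One Bayesian fusion step over a grid of candidate locations.

    prior         : (N,) probability over N candidate locations (from a motion model)
    fingerprints  : (N, B) stored mean RSSI of B Bluetooth beacons at each location
    observed_rssi : (B,) currently measured RSSI vector
    sigma         : assumed RSSI noise standard deviation in dB (an illustrative value)
    """
    # Gaussian likelihood of the observation at every candidate location.
    sq_err = np.sum((fingerprints - observed_rssi) ** 2, axis=1)
    log_lik = -sq_err / (2.0 * sigma ** 2)
    log_post = np.log(prior + 1e-12) + log_lik
    log_post -= log_post.max()              # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

def estimate_position(post, grid_xy):
    """Posterior-mean position estimate; grid_xy is an (N, 2) array of coordinates."""
    return post @ grid_xy
```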
{
"docid": "neg:1840461_15",
"text": "This paper deals with prediction of anopheles number, the main vector of malaria risk, using environmental and climate variables. The variables selection is based on an automatic machine learning method using regression trees, and random forests combined with stratified two levels cross validation. The minimum threshold of variables importance is accessed using the quadratic distance of variables importance while the optimal subset of selected variables is used to perform predictions. Finally the results revealed to be qualitatively better, at the selection, the prediction, and the CPU time point of view than those obtained by GLM-Lasso method.",
"title": ""
},
{
"docid": "neg:1840461_16",
"text": "OBJECTIVE\nAlthough the prevalence of children with pervasive developmental disorders (PDD) has increased, empirical data about the role and practices of occupational therapists have not been reported in the literature. This descriptive study investigated the practice of occupational therapists with children with PDD.\n\n\nMETHOD\nA survey was mailed to 500 occupational therapists in the Sensory Integration Special Interest Section or School System Special Interest Section of the American Occupational Therapy Association in eastern and midwestern United States. The valid return rate was 58% (292 respondents). The survey used Likert scale items to measure frequency of performance problems observed in children with PDD, performance areas addressed in intervention, perceived improvement in performance, and frequency of use of and competency in intervention approaches.\n\n\nRESULTS\nThe respondents primarily worked in schools and reported that in the past 5 years they had served an increasing number of children with PDD. Most respondents provided direct services and appeared to use holistic approaches in which they addressed multiple performance domains. They applied sensory integration and environmental modification approaches most frequently and believed that they were most competent in using these approaches. Respondents who reported more frequent use of and more competence in sensory integration approaches perceived more improvement in children's sensory processing. Respondents who reported more frequent use of and more competence in child-centered play perceived more improvement in children's sensory integration and play skills.",
"title": ""
},
{
"docid": "neg:1840461_17",
"text": "With the increased global use of online media platforms, there are more opportunities than ever to misuse those platforms or perpetrate fraud. One such fraud is within the music industry, where perpetrators create automated programs, streaming songs to generate revenue or increase popularity of an artist. With growing annual revenue of the digital music industry, there are significant financial incentives for perpetrators with fraud in mind. The focus of the study is extracting user behavioral patterns and utilising them to train and compare multiple supervised classification method to detect fraud. The machine learning algorithms examined are Logistic Regression, Support Vector Machines, Random Forest and Artificial Neural Networks. The study compares performance of these algorithms trained on imbalanced datasets carrying different fractions of fraud. The trained models are evaluated using the Precision Recall Area Under the Curve (PR AUC) and a F1-score. Results show that the algorithms achieve similar performance when trained on balanced and imbalanced datasets. It also shows that Random Forest outperforms the other methods for all datasets tested in this experiment.",
"title": ""
},
{
"docid": "neg:1840461_18",
"text": "Many NLP applications require disambiguating polysemous words. Existing methods that learn polysemous word vector representations involve first detecting various senses and optimizing the sensespecific embeddings separately, which are invariably more involved than single sense learning methods such as word2vec. Evaluating these methods is also problematic, as rigorous quantitative evaluations in this space is limited, especially when compared with single-sense embeddings. In this paper, we propose a simple method to learn a word representation, given any context. Our method only requires learning the usual single sense representation, and coefficients that can be learnt via a single pass over the data. We propose several new test sets for evaluating word sense induction, relevance detection, and contextual word similarity, significantly supplementing the currently available tests. Results on these and other tests show that while our method is embarrassingly simple, it achieves excellent results when compared to the state of the art models for unsupervised polysemous word representation learning. Our code and data are at https://github.com/dingwc/",
"title": ""
},
{
"docid": "neg:1840461_19",
"text": "For an intelligent transportation system (ITS), traffic incident detection is one of the most important issues, especially for urban area which is full of signaled intersections. In this paper, we propose a novel traffic incident detection method based on the image signal processing and hidden Markov model (HMM) classifier. First, a traffic surveillance system was set up at a typical intersection of china, traffic videos were recorded and image sequences were extracted for image database forming. Second, compressed features were generated through several image processing steps, image difference with FFT was used to improve the recognition rate. Finally, HMM was used for classification of traffic signal logics (East-West, West-East, South-North, North-South) and accident of crash, the total correct rate is 74% and incident recognition rate is 84%. We believe, with more types of incident adding to the database, our detection algorithm could serve well for the traffic surveillance system.",
"title": ""
}
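The method above classifies traffic-signal states and crash incidents with an HMM over image-difference features. A hedged sketch of the classification step is given below; it assumes the third-party `hmmlearn` package (not named in the passage) and pre-extracted per-frame feature vectors, training one Gaussian HMM per class and picking the class whose model assigns the highest log-likelihood.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_hmm_per_class(sequences_by_class, n_states=4, seed=0):
    """sequences_by_class: {label: list of (T_i, D) feature arrays}."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                 # concatenated observations
        lengths = [len(s) for s in seqs]    # per-sequence lengths for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=50, random_state=seed)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify_sequence(models, seq):
    """Return the label whose HMM gives the observed sequence the highest log-likelihood."""
    scores = {label: m.score(seq) for label, m in models.items()}
    return max(scores, key=scores.get)
```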
] |
1840462 | A Large-Displacement 3-DOF Flexure Parallel Mechanism with Decoupled Kinematics Structure | [
{
"docid": "pos:1840462_0",
"text": "A new two-degrees-of-freedom (2-DOF) compliant parallel micromanipulator (CPM) utilizing flexure joints has been proposed for two-dimensional (2-D) nanomanipulation in this paper. The system is developed by a careful design and proper selection of electrical and mechanical components. Based upon the developed PRB model, both the position and velocity kinematic modelings have been performed in details, and the CPM's workspace area is determined analytically in view of the physical constraints imposed by pizeo-actuators and flexure hinges. Moreover, in order to achieve a maximum workspace subjected to the given dexterity indices, kinematic optimization of the design parameters has been carried out, which leads to a manipulator satisfying the requirement of this work. Simulation results reveal that the designed CPM can perform a high dexterous manipulation within its workspace.",
"title": ""
}
] | [
{
"docid": "neg:1840462_0",
"text": "The decision tree output of Quinlan's ID3 algorithm is one of its major weaknesses. Not only can it be incomprehensible and difficult to manipulate, but its use in expert systems frequently demands irrelevant information to be supplied. This report argues that the problem lies in the induction algorithm itself and can only be remedied by radically altering the underlying strategy. It describes a new algorithm, PRISM which, although based on ID3, uses a different induction strategy to induce rules which are modular, thus avoiding many of the problems associated with decision trees.",
"title": ""
},
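PRISM, as summarized above, induces modular rules directly rather than a decision tree: for each class it repeatedly builds a rule by greedily adding the attribute-value test with the highest precision over the instances the rule still covers, then removes the covered instances and repeats. The sketch below is a compact reconstruction from that description, for nominal attributes only; tie-breaking and edge-case handling follow our own assumptions rather than the original paper.

```python
def prism(instances, target_class):
    """instances: list of (attr_dict, label) pairs. Returns a list of rules,
    each rule being a dict of attribute: value tests predicting target_class."""
    remaining = [inst for inst in instances if inst[1] == target_class]
    pool = list(instances)
    rules = []
    while remaining:
        rule, covered = {}, list(pool)
        # Grow the rule until it covers only the target class (or no attributes remain).
        while any(lbl != target_class for _, lbl in covered):
            candidates = []
            for attrs, _ in covered:
                for a, v in attrs.items():
                    if a in rule:
                        continue
                    sub = [(x, l) for x, l in covered if x.get(a) == v]
                    t = len(sub)
                    p = sum(1 for _, l in sub if l == target_class)
                    candidates.append((p / t, p, (a, v), sub))
            if not candidates:
                break
            # Pick the test with the best precision p/t, breaking ties by coverage p.
            _, _, (a, v), covered = max(candidates, key=lambda c: (c[0], c[1]))
            rule[a] = v
        rules.append(rule)

        def matches(attrs):
            return all(attrs.get(a) == v for a, v in rule.items())

        # Remove covered instances of the target class before inducing the next rule.
        remaining = [inst for inst in remaining if not matches(inst[0])]
        pool = [inst for inst in pool if not (inst[1] == target_class and matches(inst[0]))]
    return rules
```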
{
"docid": "neg:1840462_1",
"text": "Traumatic brain injury (TBI) triggers endoplasmic reticulum (ER) stress and impairs autophagic clearance of damaged organelles and toxic macromolecules. In this study, we investigated the effects of the post-TBI administration of docosahexaenoic acid (DHA) on improving hippocampal autophagy flux and cognitive functions of rats. TBI was induced by cortical contusion injury in Sprague–Dawley rats, which received DHA (16 mg/kg in DMSO, intraperitoneal administration) or vehicle DMSO (1 ml/kg) with an initial dose within 15 min after the injury, followed by a daily dose for 3 or 7 days. First, RT-qPCR reveals that TBI induced a significant elevation in expression of autophagy-related genes in the hippocampus, including SQSTM1/p62 (sequestosome 1), lysosomal-associated membrane proteins 1 and 2 (Lamp1 and Lamp2), and cathepsin D (Ctsd). Upregulation of the corresponding autophagy-related proteins was detected by immunoblotting and immunostaining. In contrast, the DHA-treated rats did not exhibit the TBI-induced autophagy biogenesis and showed restored CTSD protein expression and activity. T2-weighted images and diffusion tensor imaging (DTI) of ex vivo brains showed that DHA reduced both gray matter and white matter damages in cortical and hippocampal tissues. DHA-treated animals performed better than the vehicle control group on the Morris water maze test. Taken together, these findings suggest that TBI triggers sustained stimulation of autophagy biogenesis, autophagy flux, and lysosomal functions in the hippocampus. Swift post-injury DHA administration restores hippocampal lysosomal biogenesis and function, demonstrating its therapeutic potential.",
"title": ""
},
{
"docid": "neg:1840462_2",
"text": "The problem of defining and classifying power system stability has been addressed by several previous CIGRE and IEEE Task Force reports. These earlier efforts, however, do not completely reflect current industry needs, experiences and understanding. In particular, the definitions are not precise and the classifications do not encompass all practical instability scenarios. This report developed by a Task Force, set up jointly by the CIGRE Study Committee 38 and the IEEE Power System Dynamic Performance Committee, addresses the issue of stability definition and classification in power systems from a fundamental viewpoint and closely examines the practical ramifications. The report aims to define power system stability more precisely, provide a systematic basis for its classification, and discuss linkages to related issues such as power system reliability and security.",
"title": ""
},
{
"docid": "neg:1840462_3",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
},
{
"docid": "neg:1840462_4",
"text": "The present article presents a tutorial on how to estimate and interpret various effect sizes. The 5th edition of the Publication Manual of the American Psychological Association (2001) described the failure to report effect sizes as a “defect” (p. 5), and 23 journals have published author guidelines requiring effect size reporting. Although dozens of effect size statistics have been available for some time, many researchers were trained at a time when effect sizes were not emphasized, or perhaps even taught. Consequently, some readers may appreciate a review of how to estimate and interpret various effect sizes. In addition to the tutorial, the authors recommend effect size interpretations that emphasize direct and explicit comparisons of effects in a new study with those reported in the prior related literature, with a focus on evaluating result replicability.",
"title": ""
},
{
"docid": "neg:1840462_5",
"text": "Large-scale instance-level image retrieval aims at retrieving specific instances of objects or scenes. Simultaneously retrieving multiple objects in a test image adds to the difficulty of the problem, especially if the objects are visually similar. This paper presents an efficient approach for per-exemplar multi-label image classification, which targets the recognition and localization of products in retail store images. We achieve runtime efficiency through the use of discriminative random forests, deformable dense pixel matching and genetic algorithm optimization. Cross-dataset recognition is performed, where our training images are taken in ideal conditions with only one single training image per product label, while the evaluation set is taken using a mobile phone in real-life scenarios in completely different conditions. In addition, we provide a large novel dataset and labeling tools for products image search, to motivate further research efforts on multi-label retail products image classification. The proposed approach achieves promising results in terms of both accuracy and runtime efficiency on 680 annotated images of our dataset, and 885 test images of GroZi-120 dataset. We make our dataset of 8350 different product images and the 680 test images from retail stores with complete annotations available to the wider community.",
"title": ""
},
{
"docid": "neg:1840462_6",
"text": "From the point of view of a programmer, the robopsychology is a synonym for the activity is done by developers to implement their machine learning applications. This robopsychological approach raises some fundamental theoretical questions of machine learning. Our discussion of these questions is constrained to Turing machines. Alan Turing had given an algorithm (aka the Turing Machine) to describe algorithms. If it has been applied to describe itself then this brings us to Turing’s notion of the universal machine. In the present paper, we investigate algorithms to write algorithms. From a pedagogy point of view, this way of writing programs can be considered as a combination of learning by listening and learning by doing due to it is based on applying agent technology and machine learning. As the main result we introduce the problem of learning and then we show that it cannot easily be handled in reality therefore it is reasonable to use machine learning algorithm for learning Turing machines.",
"title": ""
},
{
"docid": "neg:1840462_7",
"text": "Existing neural relation extraction (NRE) models rely on distant supervision and suffer from wrong labeling problems. In this paper, we propose a novel adversarial training mechanism over instances for relation extraction to alleviate the noise issue. As compared with previous denoising methods, our proposed method can better discriminate those informative instances from noisy ones. Our method is also efficient and flexible to be applied to various NRE architectures. As shown in the experiments on a large-scale benchmark dataset in relation extraction, our denoising method can effectively filter out noisy instances and achieve significant improvements as compared with the state-of-theart models.",
"title": ""
},
{
"docid": "neg:1840462_8",
"text": "The cognitive neural prosthetic (CNP) is a very versatile method for assisting paralyzed patients and patients with amputations. The CNP records the cognitive state of the subject, rather than signals strictly related to motor execution or sensation. We review a number of high-level cortical signals and their application for CNPs, including intention, motor imagery, decision making, forward estimation, executive function, attention, learning, and multi-effector movement planning. CNPs are defined by the cognitive function they extract, not the cortical region from which the signals are recorded. However, some cortical areas may be better than others for particular applications. Signals can also be extracted in parallel from multiple cortical areas using multiple implants, which in many circumstances can increase the range of applications of CNPs. The CNP approach relies on scientific understanding of the neural processes involved in cognition, and many of the decoding algorithms it uses also have parallels to underlying neural circuit functions. 169 A nn u. R ev . P sy ch ol . 2 01 0. 61 :1 69 -1 90 . D ow nl oa de d fr om a rj ou rn al s. an nu al re vi ew s. or g by C al if or ni a In st itu te o f T ec hn ol og y on 0 1/ 03 /1 0. F or p er so na l u se o nl y. ANRV398-PS61-07 ARI 17 November 2009 19:51 Cognitive neural prosthetics (CNPs): instruments that consist of an array of electrodes, a decoding algorithm, and an external device controlled by the processed cognitive signal Decoding algorithms: computer algorithms that interpret neural signals for the purposes of understanding their function or for providing control signals to machines",
"title": ""
},
{
"docid": "neg:1840462_9",
"text": "Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For SpanishEnglish translation, in particular, most parallel data available exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon’s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (information, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score.",
"title": ""
},
{
"docid": "neg:1840462_10",
"text": "Utilization of Natural Fibers in Plastic Composites: Problems and Opportunities Roger M. Rowell, Anand R, Sanadi, Daniel F. Caulfield and Rodney E. Jacobson Forest Products Laboratory, ESDA, One Gifford Pinchot Drive, Madison, WI 53705 Department of Forestry, 1630 Linden Drive, University of Wisconsin, WI 53706 recycled. Results suggest that agro-based fibers are a viable alternative to inorganic/material based reinforcing fibers in commodity fiber-thermoplastic composite materials as long as the right processing conditions are used and for applications where higher water absorption may be so critical. These renewable fibers hav low densities and high specific properties and their non-abrasive nature permits a high volume of filling in the composite. Kenaf fivers, for example, have excellent specific properties and have potential to be outstanding reinforcing fillers in plastics. In our experiments, several types of natural fibers were blended with polyprolylene(PP) and then injection molded, with the fiber weight fractions varying to 60%. A compatibilizer or a coupling agent was used to improve the interaction and adhesion between the non-polar matrix and the polar lignocellulosic fibers. The specific tensile and flexural moduli of a 50% by weight (39% by volume) of kenaf-PP composites compares favorably with 40% by weight of glass fiber (19% by volume)-PP injection molded composites. Furthermore, prelimimary results sugget that natural fiber-PP composites can be regrounded and",
"title": ""
},
{
"docid": "neg:1840462_11",
"text": "Epigenetics is the study of heritable changesin gene expression that does not involve changes to theunderlying DNA sequence, i.e. a change in phenotype notinvolved by a change in genotype. At least three mainfactor seems responsible for epigenetic change including DNAmethylation, histone modification and non-coding RNA, eachone sharing having the same property to affect the dynamicof the chromatin structure by acting on Nucleosomes position. A nucleosome is a DNA-histone complex, where around150 base pairs of double-stranded DNA is wrapped. Therole of nucleosomes is to pack the DNA into the nucleusof the Eukaryote cells, to form the Chromatin. Nucleosomepositioning plays an important role in gene regulation andseveral studies shows that distinct DNA sequence featureshave been identified to be associated with nucleosomepresence. Starting from this suggestion, the identificationof nucleosomes on a genomic scale has been successfullyperformed by DNA sequence features representation andclassical supervised classification methods such as SupportVector Machines, Logistic regression and so on. Taking inconsideration the successful application of the deep neuralnetworks on several challenging classification problems, inthis paper we want to study how deep learning network canhelp in the identification of nucleosomes.",
"title": ""
},
{
"docid": "neg:1840462_12",
"text": "Artificial neural network (ANN) has been widely applied in flood forecasting and got good results. However, it can still not go beyond one or two hidden layers for the problematic non-convex optimization. This paper proposes a deep learning approach by integrating stacked autoencoders (SAE) and back propagation neural networks (BPNN) for the prediction of stream flow, which simultaneously takes advantages of the powerful feature representation capability of SAE and superior predicting capacity of BPNN. To further improve the non-linearity simulation capability, we first classify all the data into several categories by the K-means clustering. Then, multiple SAE-BP modules are adopted to simulate their corresponding categories of data. The proposed approach is respectively compared with the support-vector-machine (SVM) model, the BP neural network model, the RBF neural network model and extreme learning machine (ELM) model. The experimental results show that the SAE-BP integrated algorithm performs much better than other benchmarks.",
"title": ""
},
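The approach above first partitions the data with K-means and then fits one SAE-BP module per partition. The sketch below keeps that per-cluster structure but is a hedged simplification: sklearn's MLPRegressor, a plain multi-layer perceptron, stands in for the stacked-autoencoder pretraining plus BP fine-tuning described in the passage.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

class ClusteredFlowModel:
    """Route each sample to a K-means cluster and predict stream flow with
    that cluster's own deep regressor (a stand-in for an SAE-BP module)."""

    def __init__(self, n_clusters=3, hidden=(64, 32), seed=0):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
        self.models = [MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000,
                                    random_state=seed) for _ in range(n_clusters)]

    def fit(self, X, y):
        labels = self.kmeans.fit_predict(X)
        for c, model in enumerate(self.models):
            mask = labels == c
            if mask.any():
                model.fit(X[mask], y[mask])
        return self

    def predict(self, X):
        labels = self.kmeans.predict(X)
        y_hat = np.empty(len(X))
        for c, model in enumerate(self.models):
            mask = labels == c
            if mask.any():
                y_hat[mask] = model.predict(X[mask])
        return y_hat
```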
{
"docid": "neg:1840462_13",
"text": "Over recent years, interest has been growing in Bitcoin, an innovation which has the potential to play an important role in e-commerce and beyond. The aim of our paper is to provide a comprehensive empirical study of the payment and investment features of Bitcoin and their implications for the conduct of ecommerce. Since network externality theory suggests that the value of a network and its take-up are interlinked, we investigate both adoption and price formation. We discover that Bitcoin returns are driven primarily by its popularity, the sentiment expressed in newspaper reports on the cryptocurrency, and total number of transactions. The paper also reports on the first global survey of merchants who have adopted this technology and model the share of sales paid for with this alternative currency, using both ordinary and Tobit regressions. Our analysis examines how country, customer and company-specific characteristics interact with the proportion of sales attributed to Bitcoin. We find that company features, use of other payment methods, customers’ knowledge about Bitcoin, as well as the size of both the official and unofficial economy are significant determinants. The results presented allow a better understanding of the practical and theoretical ramifications of this innovation.",
"title": ""
},
{
"docid": "neg:1840462_14",
"text": "Android has provided dynamic code loading (DCL) since API level one. DCL allows an app developer to load additional code at runtime. DCL raises numerous challenges with regards to security and accountability analysis of apps. While previous studies have investigated DCL on Android, in this paper we formulate and answer three critical questions that are missing from previous studies: (1) Where does the loaded code come from (remotely fetched or locally packaged), and who is the responsible entity to invoke its functionality? (2) In what ways is DCL utilized to harden mobile apps, specifically, application obfuscation? (3) What are the security risks and implications that can be found from DCL in off-the-shelf apps? We design and implement DYDROID, a system which uses both dynamic and static analysis to analyze dynamically loaded code. Dynamic analysis is used to automatically exercise apps, capture DCL behavior, and intercept the loaded code. Static analysis is used to investigate malicious behavior and privacy leakage in that dynamically loaded code. We have used DYDROID to analyze over 46K apps with little manual intervention, allowing us to conduct a large-scale measurement to investigate five aspects of DCL, such as source identification, malware detection, vulnerability analysis, obfuscation analysis, and privacy tracking analysis. We have several interesting findings. (1) 27 apps are found to violate the content policy of Google Play by executing code downloaded from remote servers. (2) We determine the distribution, pros/cons, and implications of several common obfuscation methods, including DEX encryption/loading. (3) DCL’s stealthiness enables it to be a channel to deploy malware, and we find 87 apps loading malicious binaries which are not detected by existing antivirus tools. (4) We found 14 apps that are vulnerable to code injection attacks due to dynamically loading code which is writable by other apps. (5) DCL is mainly used by third-party SDKs, meaning that app developers may not know what sort of sensitive functionality is injected into their apps.",
"title": ""
},
{
"docid": "neg:1840462_15",
"text": "PURPOSE OF REVIEW\nThe current review discusses the integration of guideline and evidence-based palliative care into heart failure end-of-life (EOL) care.\n\n\nRECENT FINDINGS\nNorth American and European heart failure societies recommend the integration of palliative care into heart failure programs. Advance care planning, shared decision-making, routine measurement of symptoms and quality of life and specialist palliative care at heart failure EOL are identified as key components to an effective heart failure palliative care program. There is limited evidence to support the effectiveness of the individual elements. However, results from the palliative care in heart failure trial suggest an integrated heart failure palliative care program can significantly improve quality of life for heart failure patients at EOL.\n\n\nSUMMARY\nIntegration of a palliative approach to heart failure EOL care helps to ensure patients receive the care that is congruent with their values, wishes and preferences. Specialist palliative care referrals are limited to those who are truly at heart failure EOL.",
"title": ""
},
{
"docid": "neg:1840462_16",
"text": "This study examined the effects of heavy resistance training on physiological acute exercise-induced fatigue (5 × 10 RM leg press) changes after two loading protocols with the same relative intensity (%) (5 × 10 RMRel) and the same absolute load (kg) (5 × 10 RMAbs) as in pretraining in men (n = 12). Exercise-induced neuromuscular (maximal strength and muscle power output), acute cytokine and hormonal adaptations (i.e., total and free testosterone, cortisol, growth hormone (GH), insulin-like growth factor-1 (IGF-1), IGF binding protein-3 (IGFBP-3), interleukin-1 receptor antagonist (IL-1ra), IL-1β, IL-6, and IL-10 and metabolic responses (i.e., blood lactate) were measured before and after exercise. The resistance training induced similar acute responses in serum cortisol concentration but increased responses in anabolic hormones of FT and GH, as well as inflammation-responsive cytokine IL-6 and the anti-inflammatory cytokine IL-10, when the same relative load was used. This response was balanced by a higher release of pro-inflammatory cytokines IL-1β and cytokine inhibitors (IL-1ra) when both the same relative and absolute load was used after training. This enhanced hormonal and cytokine response to strength exercise at a given relative exercise intensity after strength training occurred with greater accumulated fatigue and metabolic demand (i.e., blood lactate accumulation). The magnitude of metabolic demand or the fatigue experienced during the resistance exercise session influences the hormonal and cytokine response patterns. Similar relative intensities may elicit not only higher exercise-induced fatigue but also an increased acute hormonal and cytokine response during the initial phase of a resistance training period.",
"title": ""
},
{
"docid": "neg:1840462_17",
"text": "Automated Lymph Node (LN) detection is an important clinical diagnostic task but very challenging due to the low contrast of surrounding structures in Computed Tomography (CT) and to their varying sizes, poses, shapes and sparsely distributed locations. State-of-the-art studies show the performance range of 52.9% sensitivity at 3.1 false-positives per volume (FP/vol.), or 60.9% at 6.1 FP/vol. for mediastinal LN, by one-shot boosting on 3D HAAR features. In this paper, we first operate a preliminary candidate generation stage, towards -100% sensitivity at the cost of high FP levels (-40 per patient), to harvest volumes of interest (VOI). Our 2.5D approach consequently decomposes any 3D VOI by resampling 2D reformatted orthogonal views N times, via scale, random translations, and rotations with respect to the VOI centroid coordinates. These random views are then used to train a deep Convolutional Neural Network (CNN) classifier. In testing, the CNN is employed to assign LN probabilities for all N random views that can be simply averaged (as a set) to compute the final classification probability per VOI. We validate the approach on two datasets: 90 CT volumes with 388 mediastinal LNs and 86 patients with 595 abdominal LNs. We achieve sensitivities of 70%/83% at 3 FP/vol. and 84%/90% at 6 FP/vol. in mediastinum and abdomen respectively, which drastically improves over the previous state-of-the-art work.",
"title": ""
},
{
"docid": "neg:1840462_18",
"text": "Worldwide medicinal use of cannabis is rapidly escalating, despite limited evidence of its efficacy from preclinical and clinical studies. Here we show that cannabidiol (CBD) effectively reduced seizures and autistic-like social deficits in a well-validated mouse genetic model of Dravet syndrome (DS), a severe childhood epilepsy disorder caused by loss-of-function mutations in the brain voltage-gated sodium channel NaV1.1. The duration and severity of thermally induced seizures and the frequency of spontaneous seizures were substantially decreased. Treatment with lower doses of CBD also improved autistic-like social interaction deficits in DS mice. Phenotypic rescue was associated with restoration of the excitability of inhibitory interneurons in the hippocampal dentate gyrus, an important area for seizure propagation. Reduced excitability of dentate granule neurons in response to strong depolarizing stimuli was also observed. The beneficial effects of CBD on inhibitory neurotransmission were mimicked and occluded by an antagonist of GPR55, suggesting that therapeutic effects of CBD are mediated through this lipid-activated G protein-coupled receptor. Our results provide critical preclinical evidence supporting treatment of epilepsy and autistic-like behaviors linked to DS with CBD. We also introduce antagonism of GPR55 as a potential therapeutic approach by illustrating its beneficial effects in DS mice. Our study provides essential preclinical evidence needed to build a sound scientific basis for increased medicinal use of CBD.",
"title": ""
},
{
"docid": "neg:1840462_19",
"text": "Arti cial neural networks are being used with increasing frequency for high dimensional problems of regression or classi cation. This article provides a tutorial overview of neural networks, focusing on back propagation networks as a method for approximating nonlinear multivariable functions. We explain, from a statistician's vantage point, why neural networks might be attractive and how they compare to other modern regression techniques.",
"title": ""
}
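As a companion to the tutorial abstract above, here is a minimal, hedged numpy sketch of the kind of back-propagation network it discusses: one hidden layer fitted by gradient descent to approximate a nonlinear one-dimensional function. It is illustrative only and not drawn from the article itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear regression problem.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

# One hidden layer with tanh units, squared-error loss, plain gradient descent.
H, lr = 16, 0.05
W1, b1 = rng.standard_normal((1, H)) * 0.5, np.zeros(H)
W2, b2 = rng.standard_normal((H, 1)) * 0.5, np.zeros(1)

for epoch in range(2000):
    # Forward pass.
    hidden = np.tanh(X @ W1 + b1)
    y_hat = hidden @ W2 + b2
    err = y_hat - y
    # Backward pass (chain rule through the two layers).
    grad_W2 = hidden.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    d_hidden = (err @ W2.T) * (1.0 - hidden ** 2)
    grad_W1 = X.T @ d_hidden / len(X)
    grad_b1 = d_hidden.mean(axis=0)
    for p, g in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        p -= lr * g  # in-place parameter update

print("final mean squared error:", float((err ** 2).mean()))
```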
] |
1840463 | Securing Embedded User Interfaces: Android and Beyond | [
{
"docid": "pos:1840463_0",
"text": "Conflicts between security and usability goals can be avoided by considering the goals together throughout an iterative design process. A successful design involves addressing users' expectations and inferring authorization based on their acts of designation.",
"title": ""
}
] | [
{
"docid": "neg:1840463_0",
"text": "Thyroid gland is butterfly shaped organ which consists of two cone lobes and belongs to the endocrine system. It lies in front of the neck below the adams apple. Thyroid disorders are some kind of abnormalities in thyroid gland which can give rise to nodules like hypothyroidism, hyperthyroidism, goiter, benign and malignant etc. Ultrasound (US) is one among the hugely used modality to detect the thyroid disorders because it has some benefits over other techniques like non-invasiveness, low cost, free of ionizing radiations etc. This paper provides a concise overview about segmentation of thyroid nodules and importance of neural networks comparative to other techniques.",
"title": ""
},
{
"docid": "neg:1840463_1",
"text": "Since the publication of the Design Patterns book, a large number of object-oriented design patterns have been identified and codified. As part of the pattern form, objectoriented design patterns must indicate their relationships with other patterns, but these relationships are typically described very briefly, and different collections of patterns describe different relationships in different ways. In this paper we describe and classify the common relationships between object oriented design patterns. Practitioners can use these relationships to help them identity those patterns which may be applicable to a particular problem, and pattern writers can use these relationships to help them integrate new patterns into the body of the patterns literature.",
"title": ""
},
{
"docid": "neg:1840463_2",
"text": "By using a sparse representation or low-rank representation of data, the graph-based subspace clustering has recently attracted considerable attention in computer vision, given its capability and efficiency in clustering data. However, the graph weights built using the representation coefficients are not the exact ones as the traditional definition is in a deterministic way. The two steps of representation and clustering are conducted in an independent manner, thus an overall optimal result cannot be guaranteed. Furthermore, it is unclear how the clustering performance will be affected by using this graph. For example, the graph parameters, i.e., the weights on edges, have to be artificially pre-specified while it is very difficult to choose the optimum. To this end, in this paper, a novel subspace clustering via learning an adaptive low-rank graph affinity matrix is proposed, where the affinity matrix and the representation coefficients are learned in a unified framework. As such, the pre-computed graph regularizer is effectively obviated and better performance can be achieved. Experimental results on several famous databases demonstrate that the proposed method performs better against the state-of-the-art approaches, in clustering.",
"title": ""
},
{
"docid": "neg:1840463_3",
"text": "Despite the recent successes in machine learning, there remain many open challenges. Arguably one of the most important and interesting open research problems is that of data efficiency. Supervised machine learning models, and especially deep neural networks, are notoriously data hungry, often requiring millions of labeled examples to achieve desired performance. However, labeled data is often expensive or difficult to obtain, hindering advances in interesting and important domains. What avenues might we pursue to increase the data efficiency of machine learning models? One approach is semi-supervised learning. In contrast to labeled data, unlabeled data is often easy and inexpensive to obtain. Semi-supervised learning is concerned with leveraging unlabeled data to improve performance in supervised tasks. Another approach is active learning: in the presence of a labeling mechanism (oracle), how can we choose examples to be labeled in a way that maximizes the gain in performance? In this thesis we are concerned with developing models that enable us to improve data efficiency of powerful models by jointly pursuing both of these approaches. Deep generative models parameterized by neural networks have emerged recently as powerful and flexible tools for unsupervised learning. They are especially useful for modeling high-dimensional and complex data. We propose a deep generative model with a discriminative component. By including the discriminative component in the model, after training is complete the model is used for classification rather than variational approximations. The model further includes stochastic inputs of arbitrary dimension for increased flexibility and expressiveness. We leverage the stochastic layer to learn representations of the data which naturally accommodate semi-supervised learning. We develop an efficient Gibbs sampling procedure to marginalize the stochastic inputs while inferring labels. We extend the model to include uncertainty in the weights, allowing us to explicitly capture model uncertainty, and demonstrate how this allows us to use the model for active learning as well as semi-supervised learning. I would like to dedicate this thesis to my loving wife, parents, and sister . . .",
"title": ""
},
{
"docid": "neg:1840463_4",
"text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.",
"title": ""
},
{
"docid": "neg:1840463_5",
"text": "The difference between a computer game and a simulator can be a small one both require the same capabilities from the computer: realistic graphics, behavior consistent with the laws of physics, a variety of scenarios where difficulties can emerge, and some assessment technique to inform users of performance. Computer games are a multi-billion dollar industry in the United States, and as the production costs and complexity of games have increased, so has the effort to make their creation easier. Commercial software products have been developed to greatly simpl ify the game-making process, allowing developers to focus on content rather than on programming. This paper investigates Unity3D game creation software for making threedimensional engine-room simulators. Unity3D is arguably the best software product for game creation, and has been used for numerous popular and successful commercial games. Maritime universities could greatly benefit from making custom simulators to fit specific applications and requirements, as well as from reducing the cost of purchasing simulators. We use Unity3D to make a three-dimensional steam turbine simulator that achieves a high degree of realism. The user can walk around the turbine, open and close valves, activate pumps, and run the turbine. Turbine operating parameters such as RPM, condenser vacuum, lube oil temperature. and governor status are monitored. In addition, the program keeps a log of any errors made by the operator. We find that with the use of Unity3D, students and faculty are able to make custom three-dimensional ship and engine room simulators that can be used as training and evaluation tools.",
"title": ""
},
{
"docid": "neg:1840463_6",
"text": "Self-organizing models constitute valuable tools for data visualization, clustering, and data mining. Here, we focus on extensions of basic vector-based models by recursive computation in such a way that sequential and tree-structured data can be processed directly. The aim of this article is to give a unified review of important models recently proposed in literature, to investigate fundamental mathematical properties of these models, and to compare the approaches by experiments. We first review several models proposed in literature from a unifying perspective, thereby making use of an underlying general framework which also includes supervised recurrent and recursive models as special cases. We shortly discuss how the models can be related to different neuron lattices. Then, we investigate theoretical properties of the models in detail: we explicitly formalize how structures are internally stored in different context models and which similarity measures are induced by the recursive mapping onto the structures. We assess the representational capabilities of the models, and we shortly discuss the issues of topology preservation and noise tolerance. The models are compared in an experiment with time series data. Finally, we add an experiment for one context model for tree-structured data to demonstrate the capability to process complex structures.",
"title": ""
},
{
"docid": "neg:1840463_7",
"text": "Identifying the object that attracts human visual attention is an essential function for automatic services in smart environments. However, existing solutions can compute the gaze direction without providing the distance to the target. In addition, most of them rely on special devices or infrastructure support. This paper explores the possibility of using a smartphone to detect the visual attention of a user. By applying the proposed VADS system, acquiring the location of the intended object only requires one simple action: gazing at the intended object and holding up the smartphone so that the object as well as user's face can be simultaneously captured by the front and rear cameras. We extend the current advances of computer vision to develop efficient algorithms to obtain the distance between the camera and user, the user's gaze direction, and the object's direction from camera. The object's location can then be computed by solving a trigonometric problem. VADS has been prototyped on commercial off-the-shelf (COTS) devices. Extensive evaluation results show that VADS achieves low error (about 1.5° in angle and 0.15m in distance for objects within 12m) as well as short latency. We believe that VADS enables a large variety of applications in smart environments.",
"title": ""
},
{
"docid": "neg:1840463_8",
"text": "Human eye gaze is a strong candidate to create a new application area based on human-computer interaction. To implement a really practical gaze-based interaction system, gaze detection must be realized without placing any restriction on the user's behavior or comfort. This paper describes a gaze tracking system that offers freehead, simple personal calibration. It does not require the user wear anything on her head, and she can move her head freely. Personal calibration takes only a very short time; the user is asked to look at two markers on the screen. An experiment shows that the accuracy of the implemented system is about 1.0 degrees (view angle).",
"title": ""
},
{
"docid": "neg:1840463_9",
"text": "An increasingly important challenge in data analytics is dirty data in the form of missing, duplicate, incorrect, or inconsistent values. In the SampleClean project, we have developed a new suite of algorithms to estimate the results of different types of analytic queries after applying data cleaning only to a sample. First, this article describes methods for computing statistically bounded estimates of SUM, COUNT, and AVG queries from samples of data corrupted with duplications and incorrect values. Some types of data error, such as duplication, can affect sampling probabilities so results have to be re-weighted to compensate for biases. Then it presents an application of these query processing and data cleaning methods to materialized views maintenance. The view cleaning algorithm applies hashing to efficiently maintain a uniform sample of rows in a materialized view, and then dirty data query processing techniques to correct stale query results. Finally, the article describes a gradient-descent algorithm that extends this idea to the increasingly common Machine Learning-based analytics.",
"title": ""
},
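The SampleClean passage above describes estimating aggregate queries from a cleaned uniform sample, with re-weighting to compensate for duplication bias. The following Python sketch illustrates that general idea only; it is not the SampleClean implementation, and the helper names (clean_value, num_duplicates) and the simple weighting scheme are assumptions.

```python
# Minimal sketch of sample-based aggregate estimation over dirty data.
# Assumptions: clean_value(row) returns the corrected numeric value, and
# num_duplicates(row) returns how many copies of the record exist overall,
# so duplicated records are down-weighted to reduce duplication bias.
import random

def estimate_avg(dirty_rows, clean_value, num_duplicates, sample_frac=0.1, seed=0):
    rng = random.Random(seed)
    sample = [r for r in dirty_rows if rng.random() < sample_frac]
    weights = [1.0 / num_duplicates(r) for r in sample]   # duplication correction
    values = [clean_value(r) for r in sample]
    total_w = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total_w if total_w else float("nan")

# Toy usage with rows of the form (value, duplicate_count):
rows = [(10.0, 1), (10.0, 1), (42.0, 2), (42.0, 2), (7.0, 1)] * 200
print(estimate_avg(rows, clean_value=lambda r: r[0],
                   num_duplicates=lambda r: r[1], sample_frac=0.3))
```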
{
"docid": "neg:1840463_10",
"text": "English-speaking children with specific language impairment (SLI) are known to have particular difficulty with the acquisition of grammatical morphemes that carry tense and agreement features, such as the past tense -ed and third-person singular present -s. In this study, an Extended Optional Infinitive (EOI) account of SLI is evaluated. In this account, -ed, -s, BE, and DO are regarded as finiteness markers. This model predicts that finiteness markers are omitted for an extended period of time for nonimpaired children, and that this period will be extended for a longer time in children with SLI. At the same time, it predicts that if finiteness markers are present, they will be used correctly. These predictions are tested in this study. Subjects were 18 5-year-old children with SLI with expressive and receptive language deficits and two comparison groups of children developing language normally: 22 CA-equivalent (5N) and 20 younger, MLU-equivalent children (3N). It was found that the children with SLI used nonfinite forms of lexical verbs, or omitted BE and DO, more frequently than children in the 5N and 3N groups. At the same time, like the normally developing children, when the children with SLI marked finiteness, they did so appropriately. Most strikingly, the SLI group was highly accurate in marking agreement on BE and DO forms. The findings are discussed in terms of the predictions of the EOI model, in comparison to other models of the grammatical limitations of children with SLI.",
"title": ""
},
{
"docid": "neg:1840463_11",
"text": "Traditional citation analysis has been widely applied to detect patterns of scientific collaboration, map the landscapes of scholarly disciplines, assess the impact of research outputs, and observe knowledge transfer across domains. It is, however, limited, as it assumes all citations are of similar value and weights each equally. Content-based citation analysis (CCA) addresses a citation’s value by interpreting each one based on its context at both the syntactic and semantic levels. This paper provides a comprehensive overview of CAA research in terms of its theoretical foundations, methodical approaches, and example applications. In addition, we highlight how increased computational capabilities and publicly available full-text resources have opened this area of research to vast possibilities, which enable deeper citation analysis, more accurate citation prediction, and increased knowledge discovery.",
"title": ""
},
{
"docid": "neg:1840463_12",
"text": "BACKGROUND\nIndividuals with autism spectrum disorders (ASDs) often display symptoms from other diagnostic categories. Studies of clinical and psychosocial outcome in adult patients with ASDs without concomitant intellectual disability are few. The objective of this paper is to describe the clinical psychiatric presentation and important outcome measures of a large group of normal-intelligence adult patients with ASDs.\n\n\nMETHODS\nAutistic symptomatology according to the DSM-IV-criteria and the Gillberg & Gillberg research criteria, patterns of comorbid psychopathology and psychosocial outcome were assessed in 122 consecutively referred adults with normal intelligence ASDs. The subjects consisted of 5 patients with autistic disorder (AD), 67 with Asperger's disorder (AS) and 50 with pervasive developmental disorder not otherwise specified (PDD NOS). This study group consists of subjects pooled from two studies with highly similar protocols, all seen on an outpatient basis by one of three clinicians.\n\n\nRESULTS\nCore autistic symptoms were highly prevalent in all ASD subgroups. Though AD subjects had the most pervasive problems, restrictions in non-verbal communication were common across all three subgroups and, contrary to current DSM criteria, so were verbal communication deficits. Lifetime psychiatric axis I comorbidity was very common, most notably mood and anxiety disorders, but also ADHD and psychotic disorders. The frequency of these diagnoses did not differ between the ASD subgroups or between males and females. Antisocial personality disorder and substance abuse were more common in the PDD NOS group. Of all subjects, few led an independent life and very few had ever had a long-term relationship. Female subjects more often reported having been bullied at school than male subjects.\n\n\nCONCLUSION\nASDs are clinical syndromes characterized by impaired social interaction and non-verbal communication in adulthood as well as in childhood. They also carry a high risk for co-existing mental health problems from a broad spectrum of disorders and for unfavourable psychosocial life circumstances. For the next revision of DSM, our findings especially stress the importance of careful examination of the exclusion criterion for adult patients with ASDs.",
"title": ""
},
{
"docid": "neg:1840463_13",
"text": "Device-to-Device (D2D) communication has emerged as a promising technology for optimizing spectral efficiency in future cellular networks. D2D takes advantage of the proximity of communicating devices for efficient utilization of available resources, improving data rates, reducing latency, and increasing system capacity. The research community is actively investigating the D2D paradigm to realize its full potential and enable its smooth integration into the future cellular system architecture. Existing surveys on this paradigm largely focus on interference and resource management. We review recently proposed solutions in over explored and under explored areas in D2D. These solutions include protocols, algorithms, and architectures in D2D. Furthermore, we provide new insights on open issues in these areas. Finally, we discuss potential future research directions.",
"title": ""
},
{
"docid": "neg:1840463_14",
"text": ".................................................................................................................................... iii Acknowledgments..................................................................................................................... iv Table of",
"title": ""
},
{
"docid": "neg:1840463_15",
"text": "ISSN: 2167-0811 (Print) 2167-082X (Online) Journal homepage: http://www.tandfonline.com/loi/rdij20 Algorithmic Transparency in the News Media Nicholas Diakopoulos & Michael Koliska To cite this article: Nicholas Diakopoulos & Michael Koliska (2016): Algorithmic Transparency in the News Media, Digital Journalism, DOI: 10.1080/21670811.2016.1208053 To link to this article: http://dx.doi.org/10.1080/21670811.2016.1208053",
"title": ""
},
{
"docid": "neg:1840463_16",
"text": "Sparse representations of text such as bag-ofwords models or extended explicit semantic analysis (ESA) representations are commonly used in many NLP applications. However, for short texts, the similarity between two such sparse vectors is not accurate due to the small term overlap. While there have been multiple proposals for dense representations of words, measuring similarity between short texts (sentences, snippets, paragraphs) requires combining these token level similarities. In this paper, we propose to combine ESA representations and word2vec representations as a way to generate denser representations and, consequently, a better similarity measure between short texts. We study three densification mechanisms that involve aligning sparse representation via many-to-many, many-to-one, and oneto-one mappings. We then show the effectiveness of these mechanisms on measuring similarity between short texts.",
"title": ""
},
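The densification passage above combines sparse ESA-style vectors with dense word-embedding vectors to score short-text similarity. Below is a minimal sketch of one way to blend the two cosine similarities; the 50/50 blend weight and the precomputed vector inputs are assumptions, not the paper's actual alignment mechanisms.

```python
# Minimal sketch: blend a sparse-representation cosine with a dense-embedding
# cosine to score short-text similarity. Inputs are assumed to be precomputed
# numpy vectors; alpha controls the blend and is an arbitrary choice here.
import numpy as np

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

def short_text_similarity(sparse_a, sparse_b, dense_a, dense_b, alpha=0.5):
    return alpha * cosine(sparse_a, sparse_b) + (1.0 - alpha) * cosine(dense_a, dense_b)

# Toy usage: sparse vectors over a 5-term vocabulary, dense 3-d embeddings.
print(short_text_similarity(np.array([1., 0., 2., 0., 0.]), np.array([0., 1., 2., 0., 0.]),
                            np.array([0.1, 0.7, 0.2]), np.array([0.2, 0.6, 0.1])))
```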
{
"docid": "neg:1840463_17",
"text": "While sentiment and emotion analysis has received a considerable amount of research attention, the notion of understanding and detecting the intensity of emotions is relatively less explored. This paper describes a system developed for predicting emotion intensity in tweets. Given a Twitter message, CrystalFeel uses features derived from parts-of-speech, ngrams, word embedding, and multiple affective lexicons including Opinion Lexicon, SentiStrength, AFFIN, NRC Emotion & Hash Emotion, and our in-house developed EI Lexicons to predict the degree of the intensity associated with fear, anger, sadness, and joy in the tweet. We found that including the affective lexicons-based features allowed the system to obtain strong prediction performance, while revealing interesting emotion word-level and message-level associations. On gold test data, CrystalFeel obtained Pearson correlations of .717 on average emotion intensity and of .816 on sentiment intensity.",
"title": ""
},
{
"docid": "neg:1840463_18",
"text": "Mutation analysis evaluates a testing technique by measur- ing how well it detects seeded faults (mutants). Mutation analysis is hampered by inherent scalability problems — a test suite is executed for each of a large number of mutants. Despite numerous optimizations presented in the literature, this scalability issue remains, and this is one of the reasons why mutation analysis is hardly used in practice. Whereas most previous optimizations attempted to stati- cally reduce the number of executions or their computational overhead, this paper exploits information available only at run time to further reduce the number of executions. First, state infection conditions can reveal — with a single test execution of the unmutated program — which mutants would lead to a different state, thus avoiding unnecessary test executions. Second, determining whether an infected execution state propagates can further reduce the number of executions. Mutants that are embedded in compound expressions may infect the state locally without affecting the outcome of the compound expression. Third, those mutants that do infect the state can be partitioned based on the resulting infected state — if two mutants lead to the same infected state, only one needs to be executed as the result of the other can be inferred. We have implemented these optimizations in the Major mu- tation framework and empirically evaluated them on 14 open source programs. The optimizations reduced the mutation analysis time by 40% on average.",
"title": ""
}
] |
1840464 | A fully-adaptive wideband 0.5–32.75Gb/s FPGA transceiver in 16nm FinFET CMOS technology | [
{
"docid": "pos:1840464_0",
"text": "The introduction of high-speed backplane transceivers inside FPGAs has addressed critical issues such as the ease in scalability of performance, high availability, flexible architectures, the use of standards, and rapid time to market. These have been crucial to address the ever-increasing demand for bandwidth in communication and storage systems [1-3], requiring novel techniques in receiver (RX) and clocking circuits.",
"title": ""
}
] | [
{
"docid": "neg:1840464_0",
"text": "This paper presents a novel design of cylindrical modified Luneberg lens antenna at millimeter-wave (mm-wave) frequencies in which no dielectric is needed as lens material. The cylindrical modified Luneberg lens consists of two air-filled, almost-parallel plates whose spacing continuously varies with the radius to simulate the general Luneberg's Law. A planar antipodal linearly-tapered slot antenna (ALTSA) is placed between the parallel plates at the focal position of the lens as a feed antenna. A combined ray-optics/diffraction method and CST-MWS are used to analyze and design this lens antenna. Measured results of a fabricated cylindrical modified Luneberg lens with a diameter of 100 mm show good agreement with theoretical predictions. At the design frequency of 30 GHz, the measured 3-dB E- and H-plane beamwidths are 8.6° and 68°, respectively. The first sidelobe level in the E-plane is -20 dB, and the cross-polarization is -28 dB below peak. The measured aperture efficiency is 68% at 30 GHz, and varies between 50% and 71% over the tested frequency band of 29-32 GHz. Due to its rotational symmetry, this lens can be used to launch multiple beams by implementing an arc array of planar ALTSA elements at the periphery of the lens. A 21-element antenna array with a -3-D dB beam crossover and a scan angle of 180° is demonstrated. The measured overall scan coverage is up to ±80° with gain drop less than -3 dB.",
"title": ""
},
{
"docid": "neg:1840464_1",
"text": "Introduction: The causal relation between tongue thrust swallowing or habit and development of anterior open bite continues to be made in clinical orthodontics yet studies suggest a lack of evidence to support a cause and effect. Treatment continues to be directed towards closing the anterior open bite frequently with surgical intervention to reposition the maxilla and mandible. This case report illustrates a highly successful non-surgical orthodontic treatment without extractions.",
"title": ""
},
{
"docid": "neg:1840464_2",
"text": "AIM\nTo examine the relationship between calf circumference and muscle mass, and to evaluate the suitability of calf circumference as a surrogate marker of muscle mass for the diagnosis of sarcopenia among middle-aged and older Japanese men and women.\n\n\nMETHODS\nA total of 526 adults aged 40-89 years participated in the present cross-sectional study. The maximum calf circumference was measured in a standing position. Appendicular skeletal muscle mass was measured using dual-energy X-ray absorptiometry, and the skeletal muscle index was calculated as appendicular skeletal muscle mass divided by the square of the height (kg/m(2)). The cut-off values for sarcopenia were defined as a skeletal muscle index of less than -2 standard deviations of the mean value for Japanese young adults, as defined previously.\n\n\nRESULTS\nCalf circumference was positively correlated with appendicular skeletal muscle (r = 0.81 in men, r = 0.73 in women) and skeletal muscle index (r = 0.80 in men, r = 0.69 in women). In receiver operating characteristic analysis, the optimal calf circumference cut-off values for predicting sarcopenia were 34 cm (sensitivity 88%, specificity 91%) in men and 33 cm (sensitivity 76%, specificity 73%) in women.\n\n\nCONCLUSIONS\nCalf circumference was positively correlated with appendicular skeletal muscle mass and skeletal muscle index, and could be used as a surrogate marker of muscle mass for diagnosing sarcopenia. The suggested cut-off values of calf circumference for predicting low muscle mass are <34 cm in men and <33 cm in women.",
"title": ""
},
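The calf-circumference passage above defines the skeletal muscle index (appendicular skeletal muscle mass divided by height squared) and reports screening cut-offs of 34 cm for men and 33 cm for women. A small Python sketch of those two computations follows; the function names are illustrative assumptions and this is not a diagnostic tool.

```python
# Minimal sketch of the two quantities described in the passage above.
def skeletal_muscle_index(asm_kg: float, height_m: float) -> float:
    """Skeletal muscle index = appendicular skeletal muscle mass (kg) / height (m) squared."""
    return asm_kg / (height_m ** 2)

def low_muscle_mass_by_calf(calf_cm: float, sex: str) -> bool:
    """Screening rule from the passage: calf circumference <34 cm (men) or <33 cm (women)."""
    cutoff_cm = 34.0 if sex.lower() == "male" else 33.0
    return calf_cm < cutoff_cm

# Example: 20.4 kg of appendicular muscle at 1.70 m height -> SMI of about 7.06 kg/m^2.
print(round(skeletal_muscle_index(20.4, 1.70), 2))
print(low_muscle_mass_by_calf(32.5, "female"))  # True: below the 33 cm cut-off
```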
{
"docid": "neg:1840464_3",
"text": "Interconnect architectures which leverage high-bandwidth optical channels offer a promising solution to address the increasing chip-to-chip I/O bandwidth demands. This paper describes a dense, high-speed, and low-power CMOS optical interconnect transceiver architecture. Vertical-cavity surface-emitting laser (VCSEL) data rate is extended for a given average current and corresponding reliability level with a four-tap current summing FIR transmitter. A low-voltage integrating and double-sampling optical receiver front-end provides adequate sensitivity in a power efficient manner by avoiding linear high-gain elements common in conventional transimpedance-amplifier (TIA) receivers. Clock recovery is performed with a dual-loop architecture which employs baud-rate phase detection and feedback interpolation to achieve reduced power consumption, while high-precision phase spacing is ensured at both the transmitter and receiver through adjustable delay clock buffers. A prototype chip fabricated in 1 V 90 nm CMOS achieves 16 Gb/s operation while consuming 129 mW and occupying 0.105 mm2.",
"title": ""
},
{
"docid": "neg:1840464_4",
"text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.",
"title": ""
},
{
"docid": "neg:1840464_5",
"text": "BACKGROUND\nIn 2010, the World Health Organization published benchmarks for training in osteopathy in which osteopathic visceral techniques are included. The purpose of this study was to identify and critically appraise the scientific literature concerning the reliability of diagnosis and the clinical efficacy of techniques used in visceral osteopathy.\n\n\nMETHODS\nDatabases MEDLINE, OSTMED.DR, the Cochrane Library, Osteopathic Research Web, Google Scholar, Journal of American Osteopathic Association (JAOA) website, International Journal of Osteopathic Medicine (IJOM) website, and the catalog of Académie d'ostéopathie de France website were searched through December 2017. Only inter-rater reliability studies including at least two raters or the intra-rater reliability studies including at least two assessments by the same rater were included. For efficacy studies, only randomized-controlled-trials (RCT) or crossover studies on unhealthy subjects (any condition, duration and outcome) were included. Risk of bias was determined using a modified version of the quality appraisal tool for studies of diagnostic reliability (QAREL) in reliability studies. For the efficacy studies, the Cochrane risk of bias tool was used to assess their methodological design. Two authors performed data extraction and analysis.\n\n\nRESULTS\nEight reliability studies and six efficacy studies were included. The analysis of reliability studies shows that the diagnostic techniques used in visceral osteopathy are unreliable. Regarding efficacy studies, the least biased study shows no significant difference for the main outcome. The main risks of bias found in the included studies were due to the absence of blinding of the examiners, an unsuitable statistical method or an absence of primary study outcome.\n\n\nCONCLUSIONS\nThe results of the systematic review lead us to conclude that well-conducted and sound evidence on the reliability and the efficacy of techniques in visceral osteopathy is absent.\n\n\nTRIAL REGISTRATION\nThe review is registered PROSPERO 12th of December 2016. Registration number is CRD4201605286 .",
"title": ""
},
{
"docid": "neg:1840464_6",
"text": "The number of crime incidents that is reported per day in India is increasing dramatically. The criminals today use various advanced technologies and commit crimes in really tactful ways. This makes crime investigation a more complicated process. Thus the police officers have to perform a lot of manual tasks to get a thread for investigation. This paper deals with the study of data mining based systems for analyzing crime information and thus automates the crime investigation procedure of the police officers. The majority of these frameworks utilize a blend of data mining methods such as clustering and classification for the effective investigation of the criminal acts.",
"title": ""
},
{
"docid": "neg:1840464_7",
"text": "We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation with separate sentence planning and surface realization stages to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing state-of-the-art with regards to ngram-based scores while providing more relevant outputs.",
"title": ""
},
{
"docid": "neg:1840464_8",
"text": "Recent studies show that more than 86% of Internet paths allow well-designed TCP extensions, meaning that it is still possible to deploy transport layer improvements despite the existence of middleboxes in the network. Hence, the blame for the slow evolution of protocols (with extensions taking many years to nbecome widely used) should be placed on end systems.\n In this paper, we revisit the case for moving protocols stacks up into user space in order to ease the deployment of new protocols, extensions, or performance optimizations. We present MultiStack, operating system support for user-level protocol stacks. MultiStack runs within commodity operating systems, can concurrently host a large number of isolated stacks, has a fall-back path to the legacy host stack, and is able to process packets at rates of 10Gb/s.\n We validate our design by showing that our mux/demux layer can validate and switch packets at line rate (up to 14.88 Mpps) on a 10 Gbit port using 1-2 cores, and that a proof-of-concept HTTP server running over a basic userspace TCP outperforms by 18-90% both the same server and nginx running over the kernel's stack.",
"title": ""
},
{
"docid": "neg:1840464_9",
"text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the number of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.",
"title": ""
},
{
"docid": "neg:1840464_10",
"text": "Packaging appearance is extremely important in cigarette manufacturing. Typically, there are two types of cigarette packaging defects: (1) cigarette laying defects such as incorrect cigarette numbers and irregular layout; (2) tin paper handle defects such as folded paper handles. In this paper, an automated vision-based defect inspection system is designed for cigarettes packaged in tin containers. The first type of defects is inspected by counting the number of cigarettes in a tin container. First k-means clustering is performed to segment cigarette regions. After noise filtering, valid cigarette regions are identified by estimating individual cigarette area using linear regression. The k clustering centers and area estimation function are learned off-line on training images. The second kind of defect is detected by checking the segmented paper handle region. Experimental results on 500 test images demonstrate the effectiveness of the proposed inspection system. The proposed method also contributes to the general detection and classification system such as identifying mitosis in early diagnosis of cervical cancer.",
"title": ""
},
{
"docid": "neg:1840464_11",
"text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.",
"title": ""
},
{
"docid": "neg:1840464_12",
"text": "This paper introduces a novel spectral framework for solving Markov decision processes (MDPs) by jointly learning representations and optimal policies. The major components of the framework described in this paper include: (i) A general scheme for constructing representations or basis functions by diagonalizing symmetric diffusion operators (ii) A specific instantiation of this approach where global basis functions called proto-value functions (PVFs) are formed using the eigenvectors of the graph Laplacian on an undirected graph formed from state transitions induced by the MDP (iii) A three-phased procedure called representation policy iteration comprising of a sample collection phase, a representation learning phase that constructs basis functions from samples, and a final parameter estimation phase that determines an (approximately) optimal policy within the (linear) subspace spanned by the (current) basis functions. (iv) A specific instantiation of the RPI framework using least-squares policy iteration (LSPI) as the parameter estimation method (v) Several strategies for scaling the proposed approach to large discrete and continuous state spaces, including the Nyström extension for out-of-sample interpolation of eigenfunctions, and the use of Kronecker sum factorization to construct compact eigenfunctions in product spaces such as factored MDPs (vi) Finally, a series of illustrative discrete and continuous control tasks, which both illustrate the concepts and provide a benchmark for evaluating the proposed approach. Many challenges remain to be addressed in scaling the proposed framework to large MDPs, and several elaboration of the proposed framework are briefly summarized at the end.",
"title": ""
},
{
"docid": "neg:1840464_13",
"text": "Modern heuristics or metaheuristics are optimization algorithms that have been increasingly used during the last decades to support complex decision-making in a number of fields, such as logistics and transportation, telecommunication networks, bioinformatics, finance, and the like. The continuous increase in computing power, together with advancements in metaheuristics frameworks and parallelization strategies, are empowering these types of algorithms as one of the best alternatives to solve rich and real-life combinatorial optimization problems that arise in a number of financial and banking activities. This article reviews some of the works related to the use of metaheuristics in solving both classical and emergent problems in the finance arena. A non-exhaustive list of examples includes rich portfolio optimization, index tracking, enhanced indexation, credit risk, stock investments, financial project scheduling, option pricing, feature selection, bankruptcy and financial distress prediction, and credit risk assessment. This article also discusses some open opportunities for researchers in the field, and forecast the evolution of metaheuristics to include real-life uncertainty conditions into the optimization problems being considered.",
"title": ""
},
{
"docid": "neg:1840464_14",
"text": "Despite significant advances in image segmentation techniques, evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of largescale performance evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset.",
"title": ""
},
{
"docid": "neg:1840464_15",
"text": "There are many advantages of using high frequency PWM (in the range of 50 to 100 kHz) in motor drive applications. High motor efficiency, fast control response, lower motor torque ripple, close to ideal sinusoidal motor current waveform, smaller filter size, lower cost filter, etc. are a few of the advantages. However, higher frequency PWM is also associated with severe voltage reflection and motor insulation breakdown issues at the motor terminals. If standard Si IGBT based inverters are employed, losses in the switches make it difficult to overcome significant drop in efficiency of converting electrical power to mechanical power. Work on SiC and GaN based inverter has progressed and variable frequency drives (VFDs) can now be operated efficiently at carrier frequencies in the 50 to 200 kHz range, using these devices. Using soft magnetic material, the overall efficiency of filtering can be improved. The switching characteristics of SiC and GaN devices are such that even at high switching frequency, the turn on and turn off losses are minimal. Hence, there is not much penalty in increasing the carrier frequency of the VFD. Losses in AC motors due to PWM waveform are significantly reduced. All the above features put together improves system efficiency. This paper presents results obtained on using a 6-in-1 GaN module for VFD application, operating at a carrier frequency of 100 kHz with an output sine wave filter. Experimental results show the improvement in motor efficiency and system efficiency on using a GaN based VFD in comparison to the standard Si IGBT based VFD.",
"title": ""
},
{
"docid": "neg:1840464_16",
"text": "CONTEXT\nPrimary care physicians report high levels of distress, which is linked to burnout, attrition, and poorer quality of care. Programs to reduce burnout before it results in impairment are rare; data on these programs are scarce.\n\n\nOBJECTIVE\nTo determine whether an intensive educational program in mindfulness, communication, and self-awareness is associated with improvement in primary care physicians' well-being, psychological distress, burnout, and capacity for relating to patients.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nBefore-and-after study of 70 primary care physicians in Rochester, New York, in a continuing medical education (CME) course in 2007-2008. The course included mindfulness meditation, self-awareness exercises, narratives about meaningful clinical experiences, appreciative interviews, didactic material, and discussion. An 8-week intensive phase (2.5 h/wk, 7-hour retreat) was followed by a 10-month maintenance phase (2.5 h/mo).\n\n\nMAIN OUTCOME MEASURES\nMindfulness (2 subscales), burnout (3 subscales), empathy (3 subscales), psychosocial orientation, personality (5 factors), and mood (6 subscales) measured at baseline and at 2, 12, and 15 months.\n\n\nRESULTS\nOver the course of the program and follow-up, participants demonstrated improvements in mindfulness (raw score, 45.2 to 54.1; raw score change [Delta], 8.9; 95% confidence interval [CI], 7.0 to 10.8); burnout (emotional exhaustion, 26.8 to 20.0; Delta = -6.8; 95% CI, -4.8 to -8.8; depersonalization, 8.4 to 5.9; Delta = -2.5; 95% CI, -1.4 to -3.6; and personal accomplishment, 40.2 to 42.6; Delta = 2.4; 95% CI, 1.2 to 3.6); empathy (116.6 to 121.2; Delta = 4.6; 95% CI, 2.2 to 7.0); physician belief scale (76.7 to 72.6; Delta = -4.1; 95% CI, -1.8 to -6.4); total mood disturbance (33.2 to 16.1; Delta = -17.1; 95% CI, -11 to -23.2), and personality (conscientiousness, 6.5 to 6.8; Delta = 0.3; 95% CI, 0.1 to 5 and emotional stability, 6.1 to 6.6; Delta = 0.5; 95% CI, 0.3 to 0.7). Improvements in mindfulness were correlated with improvements in total mood disturbance (r = -0.39, P < .001), perspective taking subscale of physician empathy (r = 0.31, P < .001), burnout (emotional exhaustion and personal accomplishment subscales, r = -0.32 and 0.33, respectively; P < .001), and personality factors (conscientiousness and emotional stability, r = 0.29 and 0.25, respectively; P < .001).\n\n\nCONCLUSIONS\nParticipation in a mindful communication program was associated with short-term and sustained improvements in well-being and attitudes associated with patient-centered care. Because before-and-after designs limit inferences about intervention effects, these findings warrant randomized trials involving a variety of practicing physicians.",
"title": ""
},
{
"docid": "neg:1840464_17",
"text": "Blockchain technology like Bitcoin is a rapidly growing field of research which has found a wide array of applications. However, the power consumption of the mining process in the Bitcoin blockchain alone is estimated to be at least as high as the electricity consumption of Ireland which constitutes a serious liability to the widespread adoption of blockchain technology. We propose a novel instantiation of a proof of human-work which is a cryptographic proof that an amount of human work has been exercised, and show its use in the mining process of a blockchain. Next to our instantiation there is only one other instantiation known which relies on indistinguishability obfuscation, a cryptographic primitive whose existence is only conjectured. In contrast, our construction is based on the cryptographic principle of multiparty computation (which we use in a black box manner) and thus is the first known feasible proof of human-work scheme. Our blockchain mining algorithm called uMine, can be regarded as an alternative energy-efficient approach to mining.",
"title": ""
},
{
"docid": "neg:1840464_18",
"text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.",
"title": ""
},
{
"docid": "neg:1840464_19",
"text": "This paper presents a machine learning based handover management scheme for LTE to improve the Quality of Experience (QoE) of the user in the presence of obstacles. We show that, in this scenario, a state-of-the-art handover algorithm is unable to select the appropriate target cell for handover, since it always selects the target cell with the strongest signal without taking into account the perceived QoE of the user after the handover. In contrast, our scheme learns from past experience how the QoE of the user is affected when the handover was done to a certain eNB. Our performance evaluation shows that the proposed scheme substantially improves the number of completed downloads and the average download time compared to state-of-the-art. Furthermore, its performance is close to an optimal approach in the coverage region affected by an obstacle.",
"title": ""
}
] |
1840465 | Friending your way up the ladder: Connecting massive multiplayer online game behaviors with offline leadership | [
{
"docid": "pos:1840465_0",
"text": "Playing online games is experience-oriented but few studies have explored the user’s initial (trial) reaction to game playing and how this further influences a player’s behavior. Drawing upon the Uses and Gratifications theory, we investigated players’ multiple gratifications for playing (i.e. achievement, enjoyment and social interaction) and their experience with the service mechanisms offered after they had played an online game. This study explores the important antecedents of players’ proactive ‘‘stickiness” to a specific online game and examines the relationships among these antecedents. The results show that both the gratifications and service mechanisms significantly affect a player’s continued motivation to play, which is crucial to a player’s proactive stickiness to an online game. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "neg:1840465_0",
"text": "Realization of Randomness had always been a controversial concept with great importance both from theoretical and practical Perspectives. This realization has been revolutionized in the light of recent studies especially in the realms of Chaos Theory, Algorithmic Information Theory and Emergent behavior in complex systems. We briefly discuss different definitions of Randomness and also different methods for generating it. The connection between all these approaches and the notion of Normality as the necessary condition of being unpredictable would be discussed. Then a complex-system-based Random Number Generator would be introduced. We will analyze its paradoxical features (Conservative Nature and reversibility in spite of having considerable variation) by using information theoretic measures in connection with other measures. The evolution of this Random Generator is equivalent to the evolution of its probabilistic description in terms of probability distribution over blocks of different lengths. By getting the aid of simulations we will show the ability of this system to preserve normality during the process of coarse graining. Keywords—Random number generators; entropy; correlation information; elementary cellular automata; reversibility",
"title": ""
},
{
"docid": "neg:1840465_1",
"text": "In this paper we introduce a new, high-quality, dataset of images containing fruits. We also present the results of some numerical experiment for training a neural network to detect fruits. We discuss the reason why we chose to use fruits in this project by proposing a few applications that could use such classifier.",
"title": ""
},
{
"docid": "neg:1840465_2",
"text": "Slave servo clocks have an essential role in hardware and software synchronization techniques based on Precision Time Protocol (PTP). The objective of servo clocks is to remove the drift between slave and master nodes, while keeping the output timing jitter within given uncertainty boundaries. Up to now, no univocal criteria exist for servo clock design. In fact, the relationship between controller design, performances and uncertainty sources is quite evanescent. In this paper, we propose a quite simple, but exhaustive linear model, which is expected to be used in the design of enhanced servo clock architectures.",
"title": ""
},
{
"docid": "neg:1840465_3",
"text": "Expanding access to financial services holds the promise to help reduce poverty and spur economic development. But, as a practical matter, commercial banks have faced challenges expanding access to poor and low-income households in developing economies, and nonprofits have had limited reach. We review recent innovations that are improving the quantity and quality of financial access. They are taking possibilities well beyond early models centered on providing “microcredit” for small business investment. We focus on new credit mechanisms and devices that help households manage cash flows, save, and cope with risk. Our eye is on contract designs, product innovations, regulatory policy, and ultimately economic and social impacts. We relate the innovations and empirical evidence to theoretical ideas, drawing links in particular to new work in behavioral economics and to randomized evaluation methods.",
"title": ""
},
{
"docid": "neg:1840465_4",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "neg:1840465_5",
"text": "Access to large, diverse RGB-D datasets is critical for training RGB-D scene understanding algorithms. However, existing datasets still cover only a limited number of views or a restricted scale of spaces. In this paper, we introduce Matterport3D, a large-scale RGB-D dataset containing 10,800 panoramic views from 194,400 RGB-D images of 90 building-scale scenes. Annotations are provided with surface reconstructions, camera poses, and 2D and 3D semantic segmentations. The precise global alignment and comprehensive, diverse panoramic set of views over entire buildings enable a variety of supervised and self-supervised computer vision tasks, including keypoint matching, view overlap prediction, normal prediction from color, semantic segmentation, and region classification.",
"title": ""
},
{
"docid": "neg:1840465_6",
"text": "Online forums contain huge amounts of valuable user-generated content. In current forum systems, users have to passively wait for other users to visit the forum systems and read/answer their questions. The user experience for question answering suffers from this arrangement. In this paper, we address the problem of \"pushing\" the right questions to the right persons, the objective being to obtain quick, high-quality answers, thus improving user satisfaction. We propose a framework for the efficient and effective routing of a given question to the top-k potential experts (users) in a forum, by utilizing both the content and structures of the forum system. First, we compute the expertise of users according to the content of the forum system—-this is to estimate the probability of a user being an expert for a given question based on the previous question answering of the user. Specifically, we design three models for this task, including a profile-based model, a thread-based model, and a cluster-based model. Second, we re-rank the user expertise measured in probability by utilizing the structural relations among users in a forum system. The results of the two steps can be integrated naturally in a probabilistic model that computes a final ranking score for each user. Experimental results show that the proposals are very promising.",
"title": ""
},
{
"docid": "neg:1840465_7",
"text": "The software QBlade under General Public License is used for analysis and design of wind turbines. QBlade uses the Blade Element Momentum (BEM) method for the simulation of wind turbines and it is integrated with the XFOIL airfoil design and analysis. It is possible to predict wind turbine performance with it. Nowadays, Computational Fluid Dynamics (CFD) is used for optimization and design of turbine application. In this study, Horizontal wind turbine with a rotor diameter of 2 m, was designed and objected to performance analysis by QBlade and Ansys-Fluent. The graphic of the power coefficient vs. tip speed ratio (TSR) was obtained for each result. When the results are compared, the good agreement has been seen.",
"title": ""
},
{
"docid": "neg:1840465_8",
"text": "In this work, CoMoO4@NiMoO4·xH2O core-shell heterostructure electrode is directly grown on carbon fabric (CF) via a feasible hydrothermal procedure with CoMoO4 nanowires (NWs) as the core and NiMoO4 nanosheets (NSs) as the shell. This core-shell heterostructure could provide fast ion and electron transfer, a large number of active sites, and good strain accommodation. As a result, the CoMoO4@NiMoO4·xH2O electrode yields high-capacitance performance with a high specific capacitance of 1582 F g-1, good cycling stability with the capacitance retention of 97.1% after 3000 cycles and good rate capability. The electrode also shows excellent mechanical flexibility. Also, a flexible Fe2O3 nanorods/CF electrode with enhanced electrochemical performance was prepared. A solid-state asymmetric supercapacitor device is successfully fabricated by using flexible CoMoO4@NiMoO4·xH2O as the positive electrode and Fe2O3 as the negative electrode. The asymmetric supercapacitor with a maximum voltage of 1.6 V demonstrates high specific energy (41.8 Wh kg-1 at 700 W kg-1), high power density (12000 W kg-1 at 26.7 Wh kg-1), and excellent cycle ability with the capacitance retention of 89.3% after 5000 cycles (at the current density of 3A g-1).",
"title": ""
},
{
"docid": "neg:1840465_9",
"text": "Paraphrase patterns are semantically equivalent patterns, which are useful in both paraphrase recognition and generation. This paper presents a pivot approach for extracting paraphrase patterns from bilingual parallel corpora, whereby the paraphrase patterns in English are extracted using the patterns in another language as pivots. We make use of log-linear models for computing the paraphrase likelihood between pattern pairs and exploit feature functions based on maximum likelihood estimation (MLE), lexical weighting (LW), and monolingual word alignment (MWA). Using the presented method, we extract more than 1 million pairs of paraphrase patterns from about 2 million pairs of bilingual parallel sentences. The precision of the extracted paraphrase patterns is above 78%. Experimental results show that the presented method significantly outperforms a well-known method called discovery of inference rules from text (DIRT). Additionally, the log-linear model with the proposed feature functions are effective. The extracted paraphrase patterns are fully analyzed. Especially, we found that the extracted paraphrase patterns can be classified into five types, which are useful in multiple natural language processing (NLP) applications.",
"title": ""
},
{
"docid": "neg:1840465_10",
"text": "In the past year, convolutional neural networks have been shown to perform extremely well for stereo estimation. However, current architectures rely on siamese networks which exploit concatenation followed by further processing layers, requiring a minute of GPU computation per image pair. In contrast, in this paper we propose a matching network which is able to produce very accurate results in less than a second of GPU computation. Towards this goal, we exploit a product layer which simply computes the inner product between the two representations of a siamese architecture. We train our network by treating the problem as multi-class classification, where the classes are all possible disparities. This allows us to get calibrated scores, which result in much better matching performance when compared to existing approaches.",
"title": ""
},
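The stereo-matching passage above describes a siamese network whose matching score is a plain inner product between left and right feature maps, trained as multi-class classification over candidate disparities. The PyTorch sketch below illustrates that product-layer idea only; the layer sizes, the use of torch.roll for horizontal shifting (which ignores proper border handling), and the training details are assumptions rather than the paper's exact architecture.

```python
# Minimal sketch of a siamese matcher with an inner-product (dot-product) layer
# over candidate disparities. Border handling and the full training loop are omitted.
import torch
import torch.nn as nn

class SiameseDotMatcher(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(               # shared weights for both views
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, feat_dim, kernel_size=5, padding=2),
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor, max_disp: int) -> torch.Tensor:
        fl, fr = self.backbone(left), self.backbone(right)      # (B, C, H, W)
        scores = []
        for d in range(max_disp):                                # one score per disparity class
            fr_shift = torch.roll(fr, shifts=d, dims=3)          # crude horizontal shift
            scores.append((fl * fr_shift).sum(dim=1))            # inner product over channels
        return torch.stack(scores, dim=1)                        # (B, max_disp, H, W) logits

# Usage: the logits would feed a per-pixel cross-entropy loss against ground-truth disparities.
model = SiameseDotMatcher()
logits = model(torch.randn(1, 3, 64, 128), torch.randn(1, 3, 64, 128), max_disp=32)
print(logits.shape)  # torch.Size([1, 32, 64, 128])
```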
{
"docid": "neg:1840465_11",
"text": "There has been an increasing interest in the applications of polarimctric n~icrowavc radiometers for ocean wind remote sensing. Aircraft and spaceborne radiometers have found significant wind direction signals in sea surface brightness temperatures, in addition to their sensitivities on wind speeds. However, it is not yet understood what physical scattering mechanisms produce the observed wind direction dependence. To this encl, polari]nctric microwave emissions from wind-generated sea surfaces are investigated with a polarimctric two-scale scattering model of sea surfaces, which relates the directional wind-wave spectrum to passive microwave signatures of sea surfaces. T)leoretical azimuthal modulations are found to agree well with experimental observations foI all Stokes paranletcrs from nearnadir to 65° incidence angles. The up/downwind asymmetries of brightness temperatures are interpreted usiIlg the hydrodynamic modulation. The contributions of Bragg scattering by short waves, geometric optics scattering by long waves and sea foam are examined. The geometric optics scattering mechanism underestimates the directicmal signals in the first three Stokes paranletcrs, and most importantly it predicts no signals in the fourth Stokes parameter (V), in disagreement with experimental datfi. In contrast, the Bragg scattering and and contributes to most of the wind direction signals from the two-scale model correctly predicts the phase changes of tl}e up/crosswind asymmetries in 7j U from middle to high incidence angles. The accuracy of the Bragg scattering theory for radiometric emission from water ripples is corroborated by the numerical Monte Carlo simulation of rough surface scattering. ‘I’his theoretical interpretation indicates the potential use of ]Jolarimctric brightness temperatures for retrieving the directional wave spectrum of capillary waves.",
"title": ""
},
{
"docid": "neg:1840465_12",
"text": "While the RGB2GRAY conversion with fixed parameters is a classical and widely used tool for image decolorization, recent studies showed that adapting weighting parameters in a two-order multivariance polynomial model has great potential to improve the conversion ability. In this paper, by viewing the two-order model as the sum of three subspaces, it is observed that the first subspace in the two-order model has the dominating importance and the second and the third subspace can be seen as refinement. Therefore, we present a semiparametric strategy to take advantage of both the RGB2GRAY and the two-order models. In the proposed method, the RGB2GRAY result on the first subspace is treated as an immediate grayed image, and then the parameters in the second and the third subspace are optimized. Experimental results show that the proposed approach is comparable to other state-of-the-art algorithms in both quantitative evaluation and visual quality, especially for images with abundant colors and patterns. This algorithm also exhibits good resistance to noise. In addition, instead of the color contrast preserving ratio using the first-order gradient for decolorization quality metric, the color contrast correlation preserving ratio utilizing the second-order gradient is calculated as a new perceptual quality metric.",
"title": ""
},
{
"docid": "neg:1840465_13",
"text": "We introduce a fully differentiable approximation to higher-order inference for coreference resolution. Our approach uses the antecedent distribution from a span-ranking architecture as an attention mechanism to iteratively refine span representations. This enables the model to softly consider multiple hops in the predicted clusters. To alleviate the computational cost of this iterative process, we introduce a coarse-to-fine approach that incorporates a less accurate but more efficient bilinear factor, enabling more aggressive pruning without hurting accuracy. Compared to the existing state-of-the-art span-ranking approach, our model significantly improves accuracy on the English OntoNotes benchmark, while being far more computationally efficient.",
"title": ""
},
{
"docid": "neg:1840465_14",
"text": "Tracking human vital signs of breathing and heart rates during sleep is important as it can help to assess the general physical health of a person and provide useful clues for diagnosing possible diseases. Traditional approaches (e.g., Polysomnography (PSG)) are limited to clinic usage. Recent radio frequency (RF) based approaches require specialized devices or dedicated wireless sensors and are only able to track breathing rate. In this work, we propose to track the vital signs of both breathing rate and heart rate during sleep by using off-the-shelf WiFi without any wearable or dedicated devices. Our system re-uses existing WiFi network and exploits the fine-grained channel information to capture the minute movements caused by breathing and heart beats. Our system thus has the potential to be widely deployed and perform continuous long-term monitoring. The developed algorithm makes use of the channel information in both time and frequency domain to estimate breathing and heart rates, and it works well when either individual or two persons are in bed. Our extensive experiments demonstrate that our system can accurately capture vital signs during sleep under realistic settings, and achieve comparable or even better performance comparing to traditional and existing approaches, which is a strong indication of providing non-invasive, continuous fine-grained vital signs monitoring without any additional cost.",
"title": ""
},
{
"docid": "neg:1840465_15",
"text": "We present a baseline convolutional neural network (CNN) structure and image preprocessing methodology to improve facial expression recognition algorithm using CNN. To analyze the most efficient network structure, we investigated four network structures that are known to show good performance in facial expression recognition. Moreover, we also investigated the effect of input image preprocessing methods. Five types of data input (raw, histogram equalization, isotropic smoothing, diffusion-based normalization, difference of Gaussian) were tested, and the accuracy was compared. We trained 20 different CNN models (4 networks × 5 data input types) and verified the performance of each network with test images from five different databases. The experiment result showed that a three-layer structure consisting of a simple convolutional and a max pooling layer with histogram equalization image input was the most efficient. We describe the detailed training procedure and analyze the result of the test accuracy based on considerable observation.",
"title": ""
},
{
"docid": "neg:1840465_16",
"text": "We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.",
"title": ""
},
{
"docid": "neg:1840465_17",
"text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.",
"title": ""
},
{
"docid": "neg:1840465_18",
"text": "The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive study on the security of 5G wireless network systems compared with the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services are summarized with the consideration of new service requirements and new use cases in 5G wireless networks. The recent development and the existing schemes for the 5G wireless security are presented based on the corresponding security services, including authentication, availability, data confidentiality, key management, and privacy. This paper further discusses the new security features involving different technologies applied to 5G, such as heterogeneous networks, device-to-device communications, massive multiple-input multiple-output, software-defined networks, and Internet of Things. Motivated by these security research and development activities, we propose a new 5G wireless security architecture, based on which the analysis of identity management and flexible authentication is provided. As a case study, we explore a handover procedure as well as a signaling load scheme to show the advantages of the proposed security architecture. The challenges and future directions of 5G wireless security are finally summarized.",
"title": ""
},
{
"docid": "neg:1840465_19",
"text": "The dominant approach for many NLP tasks are recurrent neura l networks, in particular LSTMs, and convolutional neural networks. However , these architectures are rather shallow in comparison to the deep convolutional n etworks which are very successful in computer vision. We present a new archite ctur for text processing which operates directly on the character level and uses o nly small convolutions and pooling operations. We are able to show that the performa nce of this model increases with the depth: using up to 29 convolutional layer s, we report significant improvements over the state-of-the-art on several public t ext classification tasks. To the best of our knowledge, this is the first time that very de ep convolutional nets have been applied to NLP.",
"title": ""
}
] |
1840466 | A dual-band unidirectional coplanar antenna for 2.4–5-GHz wireless applications | [
{
"docid": "pos:1840466_0",
"text": "By embedding shorting vias, a dual-feed and dual-band L-probe patch antenna, with flexible frequency ratio and relatively small lateral size, is proposed. Dual resonant frequency bands are produced by two radiating patches located in different layers, with the lower patch supported by shorting vias. The measured impedance bandwidths, determined by 10 dB return loss, of the two operating bands reach 26.6% and 42.2%, respectively. Also the radiation patterns are stable over both operating bands. Simulation results are compared well with experiments. This antenna is highly suitable to be used as a base station antenna for multiband operation.",
"title": ""
},
{
"docid": "pos:1840466_1",
"text": "A novel uni-planar dual-band monopole antenna capable of generating two wide bands for 2.4/5 GHz WLAN operation is presented. The antenna has a simple structure consisting of a driven strip and a coupled shorted strip. The antenna occupies a small area of 6 times 20 mm2 on an FR4 substrate. The small area allows the antenna to be easily employed in the narrow space between the top edge of the display panel and the casing of the laptop computer to operate as an internal antenna. It is believed that the size of the antenna is about the smallest among the existing uni-planar internal laptop antennas for 2.4/5 GHz WLAN operation.",
"title": ""
}
] | [
{
"docid": "neg:1840466_0",
"text": "General-purpose knowledge bases are increasingly growing in terms of depth (content) and width (coverage). Moreover, algorithms for entity linking and entity retrieval have improved tremendously in the past years. These developments give rise to a new line of research that exploits and combines these developments for the purposes of text-centric information retrieval applications. This tutorial focuses on a) how to retrieve a set of entities for an ad-hoc query, or more broadly, assessing relevance of KB elements for the information need, b) how to annotate text with such elements, and c) how to use this information to assess the relevance of text. We discuss different kinds of information available in a knowledge graph and how to leverage each most effectively.\n We start the tutorial with a brief overview of different types of knowledge bases, their structure and information contained in popular general-purpose and domain-specific knowledge bases. In particular, we focus on the representation of entity-centric information in the knowledge base through names, terms, relations, and type taxonomies. Next, we will provide a recap on ad-hoc object retrieval from knowledge graphs as well as entity linking and retrieval. This is essential technology, which the remainder of the tutorial builds on. Next we will cover essential components within successful entity linking systems, including the collection of entity name information and techniques for disambiguation with contextual entity mentions. We will present the details of four previously proposed systems that successfully leverage knowledge bases to improve ad-hoc document retrieval. These systems combine the notion of entity retrieval and semantic search on one hand, with text retrieval models and entity linking on the other. Finally, we also touch on entity aspects and links in the knowledge graph as it can help to understand the entities' context.\n This tutorial is the first to compile, summarize, and disseminate progress in this emerging area and we provide both an overview of state-of-the-art methods and outline open research problems to encourage new contributions.",
"title": ""
},
{
"docid": "neg:1840466_1",
"text": "INTRODUCTION\nA \"giant\" lipoma is defined as a tumor having dimensions greater than 10 cm. Giant lipomas are rare and giant breast lipomas are exceptionally uncommon. Only six cases have been described in world literature till date. Herein we describe a case of giant breast lipoma and discuss its surgical management.\n\n\nCASE REPORT\nA 43-year-old lady presented with left sided unilateral gigantomastia. Clinical examination, radiology and histopathology diagnosed lipoma. Excision of the tumor was planned, together with correction of the breast deformity by reduction mammoplasty using McKissok technique. A tumor measuring 19 cm × 16 cm × 10 cm and weighing 1647 grams was removed. The nipple areola complex was set by infolding of the vertical pedicles and the lateral and medial flaps were approximated to create the final breast contour. The patient is doing well on follow up.\n\n\nDISCUSSION\nGiant lipomas are rare and of them, giant breast lipomas are extremely uncommon. They can grow to immense proportions and cause significant aesthetic and functional problems. The treatment is excision. But reconstruction of the breast is almost always necessary to achieve a symmetric breast in terms of volume, shape, projection and nipple areola complex symmetry compared to the normal opposite breast. Few authors have used various mammoplasty techniques for reconstruction of the breast after giant lipoma excision. Our case has the following unique features: (i) It is the third largest breast lipoma described in the literature till date, weighing 1647 grams; (ii) The Mckissock technique has been used for parenchymal reshaping which has not been previously described for giant breast lipoma.\n\n\nCONCLUSION\nThis case demonstrates that reduction mammoplasty after giant lipoma removal is highly rewarding, resulting in a smaller-sized breast that is aesthetically more pleasing, has better symmetry with the contralateral breast, and provides relief from functional mass deficit.",
"title": ""
},
{
"docid": "neg:1840466_2",
"text": "Recent advances in the statistical theory of hierarchical linear models should enable important breakthroughs in the measurement of psychological change and the study of correlates of change. A two-stage model of change is proposed here. At the first, or within-subject stage, an individual's status on some trait is modeled as a function of an individual growth trajectory plus random error. At the second, or between-subjects stage, the parameters of the individual growth trajectories vary as a function of differences between subjects in background characteristics, instructional experiences, and possibly experimental treatments. This two-stage conceptualization, illustrated with data on Head Start children, allows investigators to model individual change, predict future development, assess the quality of measurement instruments for distinguishing among growth trajectories, and to study systematic variation in growth trajectories as a function of background characteristics and experimental treatments.",
"title": ""
},
{
"docid": "neg:1840466_3",
"text": "Studies of human addicts and behavioural studies in rodent models of addiction indicate that key behavioural abnormalities associated with addiction are extremely long lived. So, chronic drug exposure causes stable changes in the brain at the molecular and cellular levels that underlie these behavioural abnormalities. There has been considerable progress in identifying the mechanisms that contribute to long-lived neural and behavioural plasticity related to addiction, including drug-induced changes in gene transcription, in RNA and protein processing, and in synaptic structure. Although the specific changes identified so far are not sufficiently long lasting to account for the nearly permanent changes in behaviour associated with addiction, recent work has pointed to the types of mechanism that could be involved.",
"title": ""
},
{
"docid": "neg:1840466_4",
"text": "Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning is emerging as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and the principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret their formulations or modelings for specific tasks on various microscopy images. In addition, we discuss the open challenges and the potential trends of future research in microscopy image analysis using deep learning.",
"title": ""
},
{
"docid": "neg:1840466_5",
"text": "We present the results from the third shared task on multimodal machine translation. In this task a source sentence in English is supplemented by an image and participating systems are required to generate a translation for such a sentence into German, French or Czech. The image can be used in addition to (or instead of) the source sentence. This year the task was extended with a third target language (Czech) and a new test set. In addition, a variant of this task was introduced with its own test set where the source sentence is given in multiple languages: English, French and German, and participating systems are required to generate a translation in Czech. Seven teams submitted 45 different systems to the two variants of the task. Compared to last year, the performance of the multimodal submissions improved, but text-only systems remain competitive.",
"title": ""
},
{
"docid": "neg:1840466_6",
"text": "In this paper, we propose a rotated bounding box based convolutional neural network (RBox-CNN) for arbitrary-oriented ship detection. RBox-CNN is an end-to-end model based on Faster R-CNN. The region proposal network generates proposals as the rotated bounding box, and then the rotation region-of-interest (RRoI) pooling layer is applied to extract region features corresponding the proposals. In addition, the diagonal region-of-interest (DRoI) pooling layer is applied simultaneously to extract context features and alleviate the problem of misalignment in RRoI pooling layer. To stably predict locations with the angle, we apply the regression of distance's projection in width/height. Experiments on HRSC2016 show that our model achieves state-of-the-art detection accuracy on ship detection. Furthermore, RBox-CNN achieves a significant improvement on DOTA for oriented general object detection in remote sensing images.",
"title": ""
},
{
"docid": "neg:1840466_7",
"text": "This paper addresses the problem of automatic player identification in broadcast sports videos filmed with a single side-view medium distance camera. Player identification in this setting is a challenging task because visual cues such as faces and jersey numbers are not clearly visible. Thus, this task requires sophisticated approaches to capture distinctive features from players to distinguish them. To this end, we use Convolutional Neural Networks (CNN) features extracted at multiple scales and encode them with an advanced pooling, called Fisher vector. We leverage it for exploring representations that have sufficient discriminatory power and ability to magnify subtle differences. We also analyze the distinguishing parts of the players and present a part based pooling approach to use these distinctive feature points. The resulting player representation is able to identify players even in difficult scenes. It achieves state-of-the-art results up to 96% on NBA basketball clips.",
"title": ""
},
{
"docid": "neg:1840466_8",
"text": "Firewalls are core elements in network security. However, managing firewall rules, especially for enterprise networks, has become complex and error-prone. Firewall filtering rules have to be carefully written and organized in order to correctly implement the security policy. In addition, inserting or modifying a filtering rule requires thorough analysis of the relationship between this rule and other rules in order to determine the proper order of this rule and commit the updates. In this paper we present a set of techniques and algorithms that provide automatic discovery of firewall policy anomalies to reveal rule conflicts and potential problems in legacy firewalls, and anomaly-free policy editing for rule insertion, removal, and modification. This is implemented in a user-friendly tool called ¿Firewall Policy Advisor.¿ The Firewall Policy Advisor significantly simplifies the management of any generic firewall policy written as filtering rules, while minimizing network vulnerability due to firewall rule misconfiguration.",
"title": ""
},
{
"docid": "neg:1840466_9",
"text": "The design of electric vehicles require a complete paradigm shift in terms of embedded systems architectures and software design techniques that are followed within the conventional automotive systems domain. It is increasingly being realized that the evolutionary approach of replacing the engine of a car by an electric engine will not be able to address issues like acceptable vehicle range, battery lifetime performance, battery management techniques, costs and weight, which are the core issues for the success of electric vehicles. While battery technology has crucial importance in the domain of electric vehicles, how these batteries are used and managed pose new problems in the area of embedded systems architecture and software for electric vehicles. At the same time, the communication and computation design challenges in electric vehicles also have to be addressed appropriately. This paper discusses some of these research challenges.",
"title": ""
},
{
"docid": "neg:1840466_10",
"text": "Fingertip suction is investigated using a compliant, underactuated, tendon-driven hand designed for underwater mobile manipulation. Tendon routing and joint stiffnesses are designed to provide ease of closure while maintaining finger rigidity, allowing the hand to pinch small objects, as well as secure large objects, without diminishing strength. While the hand is designed to grasp a range of objects, the addition of light suction flow to the fingertips is especially effective for small, low-friction (slippery) objects. Numerical simulations confirm that changing suction parameters can increase the object acquisition region, providing guidelines for future versions of the hand.",
"title": ""
},
{
"docid": "neg:1840466_11",
"text": "Ontology Learning greatly facilitates the construction of ontologies by the ontology engineer. The notion of ontology learning that we propose here includes a number of complementary disciplines that feed on different types of unstructured and semi-structured data in order to support a semi-automatic, cooperative ontology engineering process. Our ontology learning framework proceeds through ontology import, extraction, pruning, and refinement, giving the ontology engineer a wealth of coordinated tools for ontology modelling. Besides of the general architecture, we show in this paper some exemplary techniques in the ontology learning cycle that we have implemented in our ontology learning environment, KAON Text-To-Onto.",
"title": ""
},
{
"docid": "neg:1840466_12",
"text": "Model neurons composed of hundreds of compartments are currently used for studying phenomena at the level of the single cell. Large network simulations require a simplified model of a single neuron that retains the electrotonic and synaptic integrative properties of the real cell. We introduce a method for reducing the number of compartments of neocortical pyramidal neuron models (from 400 to 8-9 compartments) through a simple collapsing method based on conserving the axial resistance rather than on the surface area of the dendritic tree. The reduced models retain the general morphology of the pyramidal cells on which they are based, allowing accurate positioning of synaptic inputs and ionic conductances on individual model cells, as well as construction of spatially accurate network models. The reduced models run significantly faster than the full models, yet faithfully reproduce their electrical responses.",
"title": ""
},
{
"docid": "neg:1840466_13",
"text": "This paper presents a novel dataset for training end-to-end task oriented conversational agents. The dataset contains conversations between an operator – a task expert, and a client who seeks information about the task. Along with the conversation transcriptions, we record database API calls performed by the operator, which capture a distilled meaning of the user query. We expect that the easy-to-get supervision of database calls will allow us to train end-to-end dialogue agents with significantly less training data. The dataset is collected using crowdsourcing and the conversations cover the well-known restaurant domain. Quality of the data is enforced by mutual control among contributors. The dataset is available for download under the Creative Commons 4.0 BY-SA license.",
"title": ""
},
{
"docid": "neg:1840466_14",
"text": "Automotive embedded applications like the engine management system are composed of multiple functional components that are tightly coupled via numerous communication dependencies and intensive data sharing, while also having real-time requirements. In order to cope with complexity, especially in multi-core settings, various communication mechanisms are used to ensure data consistency and temporal determinism along functional cause-effect chains. However, existing timing analysis methods generally only support very basic communication models that need to be extended to handle the analysis of industry grade problems which involve more complex communication semantics. In this work, we give an overview of communication semantics used in the automotive industry and the different constraints to be considered in the design process. We also propose a method for model transformation to increase the expressiveness of current timing analysis methods enabling them to work with more complex communication semantics. We demonstrate this transformation approach for concrete implementations of two communication semantics, namely, implicit and LET communication. We discuss the impact on end-to-end latencies and communication overheads based on a full blown engine management system. 1998 ACM Subject Classification C.3 Real-Time and Embedded Systems, D.4.4 Communications Management",
"title": ""
},
{
"docid": "neg:1840466_15",
"text": "Lane classification is a fundamental problem for autonomous driving and map-aided localization. Many existing algorithms rely on special designed 1D or 2D filters to extract features of lane markings from either color images or LiDAR data. However, these handcrafted features could not be robust under various driving and lighting conditions.\n In this paper, we propose a novel algorithm to fuse color images and LiDAR data together. Our algorithm consists of two stages. In the first stage, we segment road surfaces and register LiDAR data with the corresponding color images. In the second stage, we train convolutional neural networks (CNNs) to classify image patches into lane markings and non-markings. Comparing with the algorithms based on handcrafted features, our algorithm learns a set of kernels to extract and integrate features from two different modalities. The pixel-level classification rate in our experiments shows that our algorithm is robust to different conditions such as shadows and occlusions.",
"title": ""
},
{
"docid": "neg:1840466_16",
"text": "Various orodispersible drug formulations have been recently introduced into the market. Oral lyophilisates and orodispersible granules, tablets or films have enriched the therapeutic options. In particular, the paediatric and geriatric population may profit from the advantages like convenient administration, lack of swallowing, ease of use. Until now, only a few novel products made it to the market as the development and production usually is more expensive than for conventional oral drug dosage forms like tablets or capsules. The review reports the recent advances, existing and upcoming products, and the significance of formulating patient-friendly oral dosage forms. The preparation of the medicines can be performed both in pharmaceutical industry and in community pharmacies. Recent advances, e.g. drug printing technologies, may facilitate this process for community or hospital pharmacies. Still, regulatory guidelines and pharmacopoeial monographs lack appropriate methods, specifications and global harmonization to foster the development of innovative orodispersible drug dosage forms.",
"title": ""
},
{
"docid": "neg:1840466_17",
"text": "In this paper, we present a probabilistic multi-task learning approach for visual saliency estimation in video. In our approach, the problem of visual saliency estimation is modeled by simultaneously considering the stimulus-driven and task-related factors in a probabilistic framework. In this framework, a stimulus-driven component simulates the low-level processes in human vision system using multi-scale wavelet decomposition and unbiased feature competition; while a task-related component simulates the high-level processes to bias the competition of the input features. Different from existing approaches, we propose a multi-task learning algorithm to learn the task-related “stimulus-saliency” mapping functions for each scene. The algorithm also learns various fusion strategies, which are used to integrate the stimulus-driven and task-related components to obtain the visual saliency. Extensive experiments were carried out on two public eye-fixation datasets and one regional saliency dataset. Experimental results show that our approach outperforms eight state-of-the-art approaches remarkably.",
"title": ""
},
{
"docid": "neg:1840466_18",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
}
] |
1840467 | Train Model using Network Architecture and Log Records Trained Architecture N 2 . Predict Performance of Untrained Architecture | [
{
"docid": "pos:1840467_0",
"text": "We focus on knowledge base construction (KBC) from richly formatted data. In contrast to KBC from text or tabular data, KBC from richly formatted data aims to extract relations conveyed jointly via textual, structural, tabular, and visual expressions. We introduce Fonduer, a machine-learning-based KBC system for richly formatted data. Fonduer presents a new data model that accounts for three challenging characteristics of richly formatted data: (1) prevalent document-level relations, (2) multimodality, and (3) data variety. Fonduer uses a new deep-learning model to automatically capture the representation (i.e., features) needed to learn how to extract relations from richly formatted data. Finally, Fonduer provides a new programming model that enables users to convert domain expertise, based on multiple modalities of information, to meaningful signals of supervision for training a KBC system. Fonduer-based KBC systems are in production for a range of use cases, including at a major online retailer. We compare Fonduer against state-of-the-art KBC approaches in four different domains. We show that Fonduer achieves an average improvement of 41 F1 points on the quality of the output knowledge base---and in some cases produces up to 1.87x the number of correct entries---compared to expert-curated public knowledge bases. We also conduct a user study to assess the usability of Fonduer's new programming model. We show that after using Fonduer for only 30 minutes, non-domain experts are able to design KBC systems that achieve on average 23 F1 points higher quality than traditional machine-learning-based KBC approaches.",
"title": ""
},
{
"docid": "pos:1840467_1",
"text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.",
"title": ""
},
{
"docid": "pos:1840467_2",
"text": "We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human experts, and an expressive search space that supports complex topologies. Our algorithm efficiently discovers architectures that outperform a large number of manually designed models for image classification, obtaining top-1 error of 3.6% on CIFAR-10 and 20.3% when transferred to ImageNet, which is competitive with the best existing neural architecture search approaches. We also present results using random search, achieving 0.3% less top-1 accuracy on CIFAR-10 and 0.1% less on ImageNet whilst reducing the search time from 36 hours down to 1 hour.",
"title": ""
}
] | [
{
"docid": "neg:1840467_0",
"text": "The original ImageNet dataset is a popular large-scale benchmark for training Deep Neural Networks. Since the cost of performing experiments (e.g, algorithm design, architecture search, and hyperparameter tuning) on the original dataset might be prohibitive, we propose to consider a downsampled version of ImageNet. In contrast to the CIFAR datasets and earlier downsampled versions of ImageNet, our proposed ImageNet32x32 (and its variants ImageNet64x64 and ImageNet16x16) contains exactly the same number of classes and images as ImageNet, with the only difference that the images are downsampled to 32×32 pixels per image (64×64 and 16×16 pixels for the variants, respectively). Experiments on these downsampled variants are dramatically faster than on the original ImageNet and the characteristics of the downsampled datasets with respect to optimal hyperparameters appear to remain similar. The proposed datasets and scripts to reproduce our results are available at http://image-net.org/download-images and https://github.com/PatrykChrabaszcz/Imagenet32_Scripts",
"title": ""
},
{
"docid": "neg:1840467_1",
"text": "In grasping, shape adaptation between hand and object has a major influence on grasp success. In this paper, we present an approach to grasping unknown objects that explicitly considers the effect of shape adaptability to simplify perception. Shape adaptation also occurs between the hand and the environment, for example, when fingers slide across the surface of the table to pick up a small object. Our approach to grasping also considers environmental shape adaptability to select grasps with high probability of success. We validate the proposed shape-adaptability-aware grasping approach in 880 real-world grasping trials with 30 objects. Our experiments show that the explicit consideration of shape adaptability of the hand leads to robust grasping of unknown objects. Simple perception suffices to achieve this robust grasping behavior.",
"title": ""
},
{
"docid": "neg:1840467_2",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
{
"docid": "neg:1840467_3",
"text": "In this paper, a simple single-phase grid-connected photovoltaic (PV) inverter topology consisting of a boost section, a low-voltage single-phase inverter with an inductive filter, and a step-up transformer interfacing the grid is considered. Ideally, this topology will not inject any lower order harmonics into the grid due to high-frequency pulse width modulation operation. However, the nonideal factors in the system such as core saturation-induced distorted magnetizing current of the transformer and the dead time of the inverter, etc., contribute to a significant amount of lower order harmonics in the grid current. A novel design of inverter current control that mitigates lower order harmonics is presented in this paper. An adaptive harmonic compensation technique and its design are proposed for the lower order harmonic compensation. In addition, a proportional-resonant-integral (PRI) controller and its design are also proposed. This controller eliminates the dc component in the control system, which introduces even harmonics in the grid current in the topology considered. The dynamics of the system due to the interaction between the PRI controller and the adaptive compensation scheme is also analyzed. The complete design has been validated with experimental results and good agreement with theoretical analysis of the overall system is observed.",
"title": ""
},
{
"docid": "neg:1840467_4",
"text": "Deep neural networks (DNNs) are vulnerable to adversarial examples, perturbations to correctly classified examples which can cause the model to misclassify. In the image domain, these perturbations are often virtually indistinguishable to human perception, causing humans and state-of-the-art models to disagree. However, in the natural language domain, small perturbations are clearly perceptible, and the replacement of a single word can drastically alter the semantics of the document. Given these challenges, we use a black-box population-based optimization algorithm to generate semantically and syntactically similar adversarial examples that fool well-trained sentiment analysis and textual entailment models with success rates of 97% and 70%, respectively. We additionally demonstrate that 92.3% of the successful sentiment analysis adversarial examples are classified to their original label by 20 human annotators, and that the examples are perceptibly quite similar. Finally, we discuss an attempt to use adversarial training as a defense, but fail to yield improvement, demonstrating the strength and diversity of our adversarial examples. We hope our findings encourage researchers to pursue improving the robustness of DNNs in the natural language domain.",
"title": ""
},
{
"docid": "neg:1840467_5",
"text": "B-spline surfaces, although widely used, are incapable of describing surfaces of arbitrary topology. It is not possible to model a general closed surface or a surface with handles as a single non-degenerate B-spline. In practice such surfaces are often needed. In this paper, we present generalizations of biquadratic and bicubic B-spline surfaces that are capable of capturing surfaces of arbitrary topology (although restrictions are placed on the connectivity of the control mesh). These results are obtained by relaxing the sufficient but not necessary smoothness constraints imposed by B-splines and through the use of an n-sided generalization of Bézier surfaces called S-patches.",
"title": ""
},
{
"docid": "neg:1840467_6",
"text": "This paper presents a novel secondary frequency and voltage control method for islanded microgrids based on distributed cooperative control. The proposed method utilizes a sparse communication network where each DG unit only requires local and its neighbors’ information to perform control actions. The frequency controller restores the system frequency to the nominal value while maintaining the equal generation cost increment value among DG units. The voltage controller simultaneously achieves the critical bus voltage restoration and accurate reactive power sharing. Subsequently, the case when the DG unit ac-side voltage reaches its limit value is discussed and a controller output limitation method is correspondingly provided to selectively realize the desired control objective. This paper also provides a small-signal dynamic model of the microgrid with the proposed controller to evaluate the system dynamic performance. Finally, simulation results on a microgrid test system are presented to validate the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "neg:1840467_7",
"text": "Existing methods on sketch based image retrieval (SBIR) are usually based on the hand-crafted features whose ability of representation is limited. In this paper, we propose a sketch based image retrieval method via image-aided cross domain learning. First, the deep learning model is introduced to learn the discriminative features. However, it needs a large number of images to train the deep model, which is not suitable for the sketch images. Thus, we propose to extend the sketch training images via introducing the real images. Specifically, we initialize the deep models with extra image data, and then extract the generalized boundary from real images as the sketch approximation. The using of generalized boundary is under the assumption that their domain is similar with sketch domain. Finally, the neural network is fine-tuned with the sketch approximation data. Experimental results on Flicker15 show that the proposed method has a strong ability to link the associated image-sketch pairs and the results outperform state-of-the-arts methods.",
"title": ""
},
{
"docid": "neg:1840467_8",
"text": "Abstract A Security Operation Center (SOC) is made up of five distinct modules: event generators, event collectors, message database, analysis engines and reaction management software. The main problem encountered when building a SOC is the integration of all these modules, usually built as autonomous parts, while matching availability, integrity and security of data and their transmission channels. In this paper we will discuss the functional architecture needed to integrate those modules. Chapter one will introduce the concepts behind each module and briefly describe common problems encountered with each of them. In chapter two we will design the global architecture of the SOC. We will then focus on collection & analysis of data generated by sensors in chapters three and four. A short conclusion will describe further research & analysis to be performed in the field of SOC design.",
"title": ""
},
{
"docid": "neg:1840467_9",
"text": "Deep Convolutional Neural Networks (CNN) have recently been shown to outperform previous state of the art approaches for image classification. Their success must in parts be attributed to the availability of large labeled training sets such as provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have proven to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when external data and target domain show similar visual characteristics. Transfer-learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for classification of paintings into art epochs. Specifically, we evaluate the impact of training set sizes on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise propose Fisher Encodings in scenarios were training data is limited.",
"title": ""
},
{
"docid": "neg:1840467_10",
"text": "This paper explains the rationale for the development of reconfigurable manufacturing systems, which possess the advantages both of dedicated lines and of flexible systems. The paper defines the core characteristics and design principles of reconfigurable manufacturing systems (RMS) and describes the structure recommended for practical RMS with RMS core characteristics. After that, a rigorous mathematical method is introduced for designing RMS with this recommended structure. An example is provided to demonstrate how this RMS design method is used. The paper concludes with a discussion of reconfigurable assembly systems. © 2011 The Society of Manufacturing Engineers. Published by Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840467_11",
"text": "The National Firearms Forensic Intelligence Database (NFFID (c) Crown Copyright 2003-2008) was developed by The Forensic Science Service (FSS) as an investigative tool for collating and comparing information from items submitted to the FSS to provide intelligence reports for the police and relevant government agencies. The purpose of these intelligence reports was to highlight current firearm and ammunition trends and their distribution within the country. This study reviews all the trends that have been highlighted by NFFID between September 2003 and September 2008. A total of 8887 guns of all types have been submitted to the FSS over the last 5 years, where an average of 21% of annual submissions are converted weapons. The makes, models, and modes of conversion of these weapons are described in detail. The number of trends identified by NFFID shows that this has been a valuable tool in the analysis of firearms-related crime.",
"title": ""
},
{
"docid": "neg:1840467_12",
"text": "The relationship between nonverbal behavior and severity of depression was investigated by following depressed participants over the course of treatment and video recording a series of clinical interviews. Facial expressions and head pose were analyzed from video using manual and automatic systems. Both systems were highly consistent for FACS action units (AUs) and showed similar effects for change over time in depression severity. When symptom severity was high, participants made fewer affiliative facial expressions (AUs 12 and 15) and more non-affiliative facial expressions (AU 14). Participants also exhibited diminished head motion (i.e., amplitude and velocity) when symptom severity was high. These results are consistent with the Social Withdrawal hypothesis: that depressed individuals use nonverbal behavior to maintain or increase interpersonal distance. As individuals recover, they send more signals indicating a willingness to affiliate. The finding that automatic facial expression analysis was both consistent with manual coding and revealed the same pattern of findings suggests that automatic facial expression analysis may be ready to relieve the burden of manual coding in behavioral and clinical science.",
"title": ""
},
{
"docid": "neg:1840467_13",
"text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.",
"title": ""
},
{
"docid": "neg:1840467_14",
"text": "HBase is a distributed column-oriented database built on top of HDFS. HBase is the Hadoop application to use when you require real-time read/write random access to very large datasets. HBase is a scalable data store targeted at random read and write access of (fairly-) structured data. It's modeled after Google's Big table and targeted to support large tables, on the order of billions of rows and millions of columns. It uses HDFS as the underlying file system and is designed to be fully distributed and highly available. Version 0.20 introduces significant performance improvement. Base's Table Input Format is designed to allow a Map Reduce program to operate on data stored in an HBase table. Table Output Format is for writing Map Reduce outputs into an HBase table. HBase has different storage characteristics than HDFS, such as the ability to do row updates and column indexing, so we can expect to see these features used by Hive in future releases. It is already possible to access HBase tables from Hive. This paper includes the step by step introduction to the HBase, Identify differences between apache HBase and a traditional RDBMS, The Problem with Relational Database Systems, Relation between the Hadoop and HBase, How an Apache HBase table is physically stored on disk. Later part of this paper introduces Map Reduce, HBase table and how Apache HBase Cells stores data, what happens to data when it is deleted. Last part explains difference between Big Data and HBase, Conclusion followed with the References.",
"title": ""
},
{
"docid": "neg:1840467_15",
"text": "In web search, relevance ranking of popular pages is relatively easy, because of the inclusion of strong signals such as anchor text and search log data. In contrast, with less popular pages, relevance ranking becomes very challenging due to a lack of information. In this paper the former is referred to as head pages, and the latter tail pages. We address the challenge by learning a model that can extract search-focused key n-grams from web pages, and using the key n-grams for searches of the pages, particularly, the tail pages. To the best of our knowledge, this problem has not been previously studied. Our approach has four characteristics. First, key n-grams are search-focused in the sense that they are defined as those which can compose \"good queries\" for searching the page. Second, key n-grams are learned in a relative sense using learning to rank techniques. Third, key n-grams are learned using search log data, such that the characteristics of key n-grams in the search log data, particularly in the heads; can be applied to the other data, particularly to the tails. Fourth, the extracted key n-grams are used as features of the relevance ranking model also trained with learning to rank techniques. Experiments validate the effectiveness of the proposed approach with large-scale web search datasets. The results show that our approach can significantly improve relevance ranking performance on both heads and tails; and particularly tails, compared with baseline approaches. Characteristics of our approach have also been fully investigated through comprehensive experiments.",
"title": ""
},
{
"docid": "neg:1840467_16",
"text": "This article addresses one of the key challenges of engaging a massive ad hoc crowd by providing sustainable incentives. The incentive model is based on a context-aware cyber-physical spatio-temporal serious game with the help of a mobile crowd sensing mechanism. To this end, this article describes a framework that can create an ad hoc social network of millions of people and provide context-aware serious-game services as an incentive. While interacting with different services, the massive crowd shares a rich trail of geo-tagged multimedia data, which acts as a crowdsourcing eco-system. The incentive model has been tested on the mass crowd at the Hajj since 2014. From our observations, we conclude that the framework provides a sustainable incentive mechanism that can solve many real-life problems such as reaching a person in a crowd within the shortest possible time, isolating significant events, finding lost individuals, handling emergency situations, helping pilgrims to perform ritual events based on location and time, and sharing geo-tagged multimedia resources among a community of interest within the crowd. The framework allows an ad hoc social network to be formed within a very large crowd, a community of interests to be created for each person, and information to be shared with the right community of interests. We present the communication paradigm of the framework, the serious game incentive model, and cloud-based massive geo-tagged social network architecture.",
"title": ""
},
{
"docid": "neg:1840467_17",
"text": "The design of a high-gain microstrip grid array antenna (MGAA) for 24-GHz automotive radar sensor applications is first presented. An amplitude tapering technique utilizing variable line width on the individual radiating element is then applied to lower sidelobe level. Next, the MGAA is simplified to a microstrip comb array antenna (MCAA). The MCAA shows broader impedance bandwidth and lower cross-polarization radiation as compared with those of the MGAA. The MCAA is designed not as a travelling-wave but a standing-wave antenna. As a result, the match load and the reflection-cancelling structure can be avoided, which is important, especially in the millimeter-wave frequencies. Finally, an emphasis is given to 45° linearly-polarized MCAA because the radiation with the orthogonal polarization from cars coming from the opposite direction does not affect the radar operation.",
"title": ""
},
{
"docid": "neg:1840467_18",
"text": "In order to promote the use of mushrooms as source of nutrients and nutraceuticals, several experiments were performed in wild and commercial species. The analysis of nutrients included determination of proteins, fats, ash, and carbohydrates, particularly sugars by HPLC-RI. The analysis of nutraceuticals included determination of fatty acids by GC-FID, and other phytochemicals such as tocopherols, by HPLC-fluorescence, and phenolics, flavonoids, carotenoids and ascorbic acid, by spectrophotometer techniques. The antimicrobial properties of the mushrooms were also screened against fungi, Gram positive and Gram negative bacteria. The wild mushroom species proved to be less energetic than the commercial sp., containing higher contents of protein and lower fat concentrations. In general, commercial species seem to have higher concentrations of sugars, while wild sp. contained lower values of MUFA but also higher contents of PUFA. alpha-Tocopherol was detected in higher amounts in the wild species, while gamma-tocopherol was not found in these species. Wild mushrooms revealed a higher content of phenols but a lower content of ascorbic acid, than commercial mushrooms. There were no differences between the antimicrobial properties of wild and commercial species. The ongoing research will lead to a new generation of foods, and will certainly promote their nutritional and medicinal use.",
"title": ""
},
{
"docid": "neg:1840467_19",
"text": "Traditional static spectrum allocation policies have been to grant each wireless service exclusive usage of certain frequency bands, leaving several spectrum bands unlicensed for industrial, scientific and medical purposes. The rapid proliferation of low-cost wireless applications in unlicensed spectrum bands has resulted in spectrum scarcity among those bands. Since most applications in Wireless Sensor Networks (WSNs) utilize the unlicensed spectrum, network-wide performance of WSNs will inevitably degrade as their popularity increases. Sharing of under-utilized licensed spectrum among unlicensed devices is a promising solution to the spectrum scarcity issue. Cognitive Radio (CR) is a new paradigm in wireless communication that allows sensor nodes as the unlicensed users or Secondary Users (SUs) to detect and use the under-utilized licensed spectrum temporarily. Given that the licensed or Primary Users (PUs) are oblivious to the presence of SUs, the SUs access the licensed spectrum opportunistically without interfering the PUs, while improving their own performance. In this paper, we propose an approach to build Cognitive Radio-based Wireless Sensor Networks (CR-WSNs). We believe that CR-WSN is the next-generation WSN. Realizing that both WSNs and CR present unique challenges to the design of CR-WSNs, we provide an overview and conceptual design of WSNs from the perspective of CR. The open issues are discussed to motivate new research interests in this field. We also present our method to achieving context-awareness and intelligence, which are the key components in CR networks, to address an open issue in CR-WSN.",
"title": ""
}
] |
1840468 | Orchestrating Caching, Transcoding and Request Routing for Adaptive Video Streaming Over ICN | [
{
"docid": "pos:1840468_0",
"text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.",
"title": ""
}
] | [
{
"docid": "neg:1840468_0",
"text": "We derive a numerical method for Darcy flow, hence also for Poisson’s equation in first order form, based on discrete exterior calculus (DEC). Exterior calculus is a generalization of vector calculus to smooth manifolds and DEC is its discretization on simplicial complexes such as triangle and tetrahedral meshes. We start by rewriting the governing equations of Darcy flow using the language of exterior calculus. This yields a formulation in terms of flux differential form and pressure. The numerical method is then derived by using the framework provided by DEC for discretizing differential forms and operators that act on forms. We also develop a discretization for spatially dependent Hodge star that varies with the permeability of the medium. This also allows us to address discontinuous permeability. The matrix representation for our discrete non-homogeneous Hodge star is diagonal, with positive diagonal entries. The resulting linear system of equations for flux and pressure are saddle type, with a diagonal matrix as the top left block. Our method requires the use of meshes in which each simplex contains its circumcenter. The performance of the proposed numerical method is illustrated on many standard test problems. These include patch tests in two and three dimensions, comparison with analytically known solution in two dimensions, layered medium with alternating permeability values, and a test with a change in permeability along the flow direction. A short introduction to the relevant parts of smooth and discrete exterior calculus is included in this paper. We also include a discussion of the boundary condition in terms of exterior calculus.",
"title": ""
},
{
"docid": "neg:1840468_1",
"text": "The Web of Things is an active research field which aims at promoting the easy access and handling of smart things' digital representations through the adoption of Web standards and technologies. While huge research and development efforts have been spent on lower level networks and software technologies, it has been recognized that little experience exists instead in modeling and building applications for the Web of Things. Although several works have proposed Representational State Transfer (REST) inspired approaches for the Web of Things, a main limitation is that poor support is provided to web developers for speeding up the development of Web of Things applications while taking full advantage of REST benefits. In this paper, we propose a framework which supports developers in modeling smart things as web resources, exposing them through RESTful Application Programming Interfaces (APIs) and developing applications on top of them. The framework consists of a Web Resource information model, a middleware, and tools for developing and publishing smart things' digital representations on the Web. We discuss the framework compliance with REST guidelines and its major implementation choices. Finally, we report on our test activities carried out within the SmartSantander European Project to evaluate the use and proficiency of our framework in a smart city scenario.",
"title": ""
},
{
"docid": "neg:1840468_2",
"text": "We investigate a novel and important application domain for deep RL: network routing. The question of whether/when traditional network protocol design, which relies on the application of algorithmic insights by human experts, can be replaced by a data-driven approach has received much attention recently. We explore this question in the context of the, arguably, most fundamental networking task: routing. Can ideas and techniques from machine learning be leveraged to automatically generate “good” routing configurations? We observe that the routing domain poses significant challenges for data-driven network protocol design and report on preliminary results regarding the power of data-driven routing. Our results suggest that applying deep reinforcement learning to this context yields high performance and is thus a promising direction for further research. We outline a research agenda for data-driven routing.",
"title": ""
},
{
"docid": "neg:1840468_3",
"text": "This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for chronic illnesses. Before applying any off-the-shelf reinforcement learning methods in this setting, we must first tackle a number of challenges. We outline some of these challenges and present methods for overcoming them. First, we describe a multiple imputation approach to overcome the problem of missing data. Second, we discuss the use of function approximation in the context of a highly variable observation set. Finally, we discuss approaches to summarizing the evidence in the data for recommending a particular action and quantifying the uncertainty around the Q-function of the recommended policy. We present the results of applying these methods to real clinical trial data of patients with schizophrenia.",
"title": ""
},
{
"docid": "neg:1840468_4",
"text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.",
"title": ""
},
{
"docid": "neg:1840468_5",
"text": "Software defect prediction helps to optimize testing resources allocation by identifying defect-prone modules prior to testing. Most existing models build their prediction capability based on a set of historical data, presumably from the same or similar project settings as those under prediction. However, such historical data is not always available in practice. One potential way of predicting defects in projects without historical data is to learn predictors from data of other projects. This paper investigates defect predictions in the cross-project context focusing on the selection of training data. We conduct three large-scale experiments on 34 data sets obtained from 10 open source projects. Major conclusions from our experiments include: (1) in the best cases, training data from other projects can provide better prediction results than training data from the same project; (2) the prediction results obtained using training data from other projects meet our criteria for acceptance on the average level, defects in 18 out of 34 cases were predicted at a Recall greater than 70% and a Precision greater than 50%; (3) results of cross-project defect predictions are related with the distributional characteristics of data sets which are valuable for training data selection. We further propose an approach to automatically select suitable training data for projects without historical data. Prediction results provided by the training data selected by using our approach are comparable with those provided by training data from the same project.",
"title": ""
},
{
"docid": "neg:1840468_6",
"text": "Most companies favour the creation and nurturing of long-term relationships with customers because retaining customers is more profitable than acquiring new ones. Churn prediction is a predictive analytics technique to identify churning customers ahead of their departure and enable customer relationship managers to take action to keep them. This work evaluates the development of an expert system for churn prediction and prevention using a Hidden Markov model (HMM). A HMM is implemented on unique data from a mobile application and its predictive performance is compared to other algorithms that are commonly used for churn prediction: Logistic Regression, Neural Network and Support Vector Machine. Predictive performance of the HMM is not outperformed by the other algorithms. HMM has substantial advantages for use in expert systems though due to low storage and computational requirements and output of highly relevant customer motivational states. Generic session data of the mobile app is used to train and test the models which makes the system very easy to deploy and the findings applicable to the whole ecosystem of mobile apps distributed in Apple's App and Google's Play Store.",
"title": ""
},
{
"docid": "neg:1840468_7",
"text": "The view information of a chest X-ray (CXR), such as frontal or lateral, is valuable in computer aided diagnosis (CAD) of CXRs. For example, it helps for the selection of atlas models for automatic lung segmentation. However, very often, the image header does not provide such information. In this paper, we present a new method for classifying a CXR into two categories: frontal view vs. lateral view. The method consists of three major components: image pre-processing, feature extraction, and classification. The features we selected are image profile, body size ratio, pyramid of histograms of orientation gradients, and our newly developed contour-based shape descriptor. The method was tested on a large (more than 8,200 images) CXR dataset hosted by the National Library of Medicine. The very high classification accuracy (over 99% for 10-fold cross validation) demonstrates the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "neg:1840468_8",
"text": "This paper presents a trajectory tracking control design which provides the essential spatial-temporal feedback control capability for fixed-wing unmanned aerial vehicles (UAVs) to execute a time critical mission reliably. In this design, a kinematic trajectory tracking control law and a control gain selection method are developed to allow the control law to be implemented on a fixed-wing UAV based on the platform's dynamic capability. The tracking control design assumes the command references of the heading and airspeed control systems are the accessible control inputs, and it does not impose restrictive model assumptions on the UAV's control systems. The control design is validated using a high-fidelity nonlinear six degrees of freedom (6DOF) model and the reported results suggest that the proposed tracking control design is able to track time-parameterized trajectories stably with robust control performance.",
"title": ""
},
{
"docid": "neg:1840468_9",
"text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-art knowledge embedding models.",
"title": ""
},
{
"docid": "neg:1840468_10",
"text": "3D biomaterial printing has emerged as a potentially revolutionary technology, promising to transform both research and medical therapeutics. Although there has been recent progress in the field, on-demand fabrication of functional and transplantable tissues and organs is still a distant reality. To advance to this point, there are two major technical challenges that must be overcome. The first is expanding upon the limited variety of available 3D printable biomaterials (biomaterial inks), which currently do not adequately represent the physical, chemical, and biological complexity and diversity of tissues and organs within the human body. Newly developed biomaterial inks and the resulting 3D printed constructs must meet numerous interdependent requirements, including those that lead to optimal printing, structural, and biological outcomes. The second challenge is developing and implementing comprehensive biomaterial ink and printed structure characterization combined with in vitro and in vivo tissueand organ-specific evaluation. This perspective outlines considerations for addressing these technical hurdles that, once overcome, will facilitate rapid advancement of 3D biomaterial printing as an indispensable tool for both investigating complex tissue and organ morphogenesis and for developing functional devices for a variety of diagnostic and regenerative medicine applications. PAPER 5 Contributed equally to this work. REcEivEd",
"title": ""
},
{
"docid": "neg:1840468_11",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
},
{
"docid": "neg:1840468_12",
"text": "External border surveillance is critical to the security of every state and the challenges it poses are changing and likely to intensify. Wireless sensor networks (WSN) are a low cost technology that provide an intelligence-led solution to effective continuous monitoring of large, busy, and complex landscapes. The linear network topology resulting from the structure of the monitored area raises challenges that have not been adequately addressed in the literature to date. In this paper, we identify an appropriate metric to measure the quality of WSN border crossing detection. Furthermore, we propose a method to calculate the required number of sensor nodes to deploy in order to achieve a specified level of coverage according to the chosen metric in a given belt region, while maintaining radio connectivity within the network. Then, we contribute a novel cross layer routing protocol, called levels division graph (LDG), designed specifically to address the communication needs and link reliability for topologically linear WSN applications. The performance of the proposed protocol is extensively evaluated in simulations using realistic conditions and parameters. LDG simulation results show significant performance gains when compared with its best rival in the literature, dynamic source routing (DSR). Compared with DSR, LDG improves the average end-to-end delays by up to 95%, packet delivery ratio by up to 20%, and throughput by up to 60%, while maintaining comparable performance in terms of normalized routing load and energy consumption.",
"title": ""
},
{
"docid": "neg:1840468_13",
"text": "Online social networks (OSNs) are increasingly threatened by social bots which are software-controlled OSN accounts that mimic human users with malicious intentions. A social botnet refers to a group of social bots under the control of a single botmaster, which collaborate to conduct malicious behavior while mimicking the interactions among normal OSN users to reduce their individual risk of being detected. We demonstrate the effectiveness and advantages of exploiting a social botnet for spam distribution and digital-influence manipulation through real experiments on Twitter and also trace-driven simulations. We also propose the corresponding countermeasures and evaluate their effectiveness. Our results can help understand the potentially detrimental effects of social botnets and help OSNs improve their bot(net) detection systems.",
"title": ""
},
{
"docid": "neg:1840468_14",
"text": "The paper first discusses the reasons why simplified solutions for the mechanical structure of fingers in robotic hands should be considered a worthy design goal. After a brief discussion about the mechanical solutions proposed so far for robotic fingers, a different design approach is proposed. It considers finger structures made of rigid links connected by flexural hinges, with joint actuation obtained by means of flexures that can be guided inside each finger according to different patterns. A simplified model of one of these structures is then presented, together with preliminary results of simulation, in order to evaluate the feasibility of the concept. Examples of technological implementation are finally presented and the perspective and problems of application are briefly discussed.",
"title": ""
},
{
"docid": "neg:1840468_15",
"text": "Nearly half a century ago, military organizations introduced “Tempest” emission-security test standards to control information leakage from unintentional electromagnetic emanations of digital electronics. The nature of these emissions has changed with evolving technology; electromechanic devices have vanished and signal frequencies increased several orders of magnitude. Recently published eavesdropping attacks on modern flat-panel displays and cryptographic coprocessors demonstrate that the risk remains acute for applications with high protection requirements. The ultra-wideband signal processing technology needed for practical attacks finds already its way into consumer electronics. Current civilian RFI limits are entirely unsuited for emission security purposes. Only an openly available set of test standards based on published criteria will help civilian vendors and users to estimate and manage emission-security risks appropriately. This paper outlines a proposal and rationale for civilian electromagnetic emission-security limits. While the presented discussion aims specifically at far-field video eavesdropping in the VHF and UHF bands, the most easy to demonstrate risk, much of the presented approach for setting test limits could be adapted equally to address other RF emanation risks.",
"title": ""
},
{
"docid": "neg:1840468_16",
"text": "While patients with poor functional health literacy (FHL) have difficulties reading and comprehending written medical instructions, it is not known whether these patients also experience problems with other modes of communication, such as face-to-face encounters with primary care physicians. We enrolled 408 English- and Spanish-speaking diabetes patients to examine whether patients with inadequate FHL report worse communication than patients with adequate FHL. We assessed patients' experiences of communication using sub-scales from the Interpersonal Processes of Care in Diverse Populations instrument. In multivariate models, patients with inadequate FHL, compared to patients with adequate FHL, were more likely to report worse communication in the domains of general clarity (adjusted odds ratio [AOR] 6.29, P<0.01), explanation of condition (AOR 4.85, P=0.03), and explanation of processes of care (AOR 2.70, p=0.03). Poor FHL appears to be a marker for oral communication problems, particularly in the technical, explanatory domains of clinician-patient dialogue. Research is needed to identify strategies to improve communication for this group of patients.",
"title": ""
},
{
"docid": "neg:1840468_17",
"text": "In this paper, we present a systematic framework for recognizing realistic actions from videos “in the wild”. Such unconstrained videos are abundant in personal collections as well as on the Web. Recognizing action from such videos has not been addressed extensively, primarily due to the tremendous variations that result from camera motion, background clutter, changes in object appearance, and scale, etc. The main challenge is how to extract reliable and informative features from the unconstrained videos. We extract both motion and static features from the videos. Since the raw features of both types are dense yet noisy, we propose strategies to prune these features. We use motion statistics to acquire stable motion features and clean static features. Furthermore, PageRank is used to mine the most informative static features. In order to further construct compact yet discriminative visual vocabularies, a divisive information-theoretic algorithm is employed to group semantically related features. Finally, AdaBoost is chosen to integrate all the heterogeneous yet complementary features for recognition. We have tested the framework on the KTH dataset and our own dataset consisting of 11 categories of actions collected from YouTube and personal videos, and have obtained impressive results for action recognition and action localization.",
"title": ""
},
{
"docid": "neg:1840468_18",
"text": "BACKGROUND\nThe efficacy of closure of a patent foramen ovale (PFO) in the prevention of recurrent stroke after cryptogenic stroke is uncertain. We investigated the effect of PFO closure combined with antiplatelet therapy versus antiplatelet therapy alone on the risks of recurrent stroke and new brain infarctions.\n\n\nMETHODS\nIn this multinational trial involving patients with a PFO who had had a cryptogenic stroke, we randomly assigned patients, in a 2:1 ratio, to undergo PFO closure plus antiplatelet therapy (PFO closure group) or to receive antiplatelet therapy alone (antiplatelet-only group). Imaging of the brain was performed at the baseline screening and at 24 months. The coprimary end points were freedom from clinical evidence of ischemic stroke (reported here as the percentage of patients who had a recurrence of stroke) through at least 24 months after randomization and the 24-month incidence of new brain infarction, which was a composite of clinical ischemic stroke or silent brain infarction detected on imaging.\n\n\nRESULTS\nWe enrolled 664 patients (mean age, 45.2 years), of whom 81% had moderate or large interatrial shunts. During a median follow-up of 3.2 years, clinical ischemic stroke occurred in 6 of 441 patients (1.4%) in the PFO closure group and in 12 of 223 patients (5.4%) in the antiplatelet-only group (hazard ratio, 0.23; 95% confidence interval [CI], 0.09 to 0.62; P=0.002). The incidence of new brain infarctions was significantly lower in the PFO closure group than in the antiplatelet-only group (22 patients [5.7%] vs. 20 patients [11.3%]; relative risk, 0.51; 95% CI, 0.29 to 0.91; P=0.04), but the incidence of silent brain infarction did not differ significantly between the study groups (P=0.97). Serious adverse events occurred in 23.1% of the patients in the PFO closure group and in 27.8% of the patients in the antiplatelet-only group (P=0.22). Serious device-related adverse events occurred in 6 patients (1.4%) in the PFO closure group, and atrial fibrillation occurred in 29 patients (6.6%) after PFO closure.\n\n\nCONCLUSIONS\nAmong patients with a PFO who had had a cryptogenic stroke, the risk of subsequent ischemic stroke was lower among those assigned to PFO closure combined with antiplatelet therapy than among those assigned to antiplatelet therapy alone; however, PFO closure was associated with higher rates of device complications and atrial fibrillation. (Funded by W.L. Gore and Associates; Gore REDUCE ClinicalTrials.gov number, NCT00738894 .).",
"title": ""
},
{
"docid": "neg:1840468_19",
"text": "Modern search systems are based on dozens or even hundreds of ranking features. The dueling bandit gradient descent (DBGD) algorithm has been shown to effectively learn combinations of these features solely from user interactions. DBGD explores the search space by comparing a possibly improved ranker to the current production ranker. To this end, it uses interleaved comparison methods, which can infer with high sensitivity a preference between two rankings based only on interaction data. A limiting factor is that it can compare only to a single exploratory ranker. We propose an online learning to rank algorithm called multileave gradient descent (MGD) that extends DBGD to learn from so-called multileaved comparison methods that can compare a set of rankings instead of merely a pair. We show experimentally that MGD allows for better selection of candidates than DBGD without the need for more comparisons involving users. An important implication of our results is that orders of magnitude less user interaction data is required to find good rankers when multileaved comparisons are used within online learning to rank. Hence, fewer users need to be exposed to possibly inferior rankers and our method allows search engines to adapt more quickly to changes in user preferences.",
"title": ""
}
] |
1840469 | Vital Sign Monitoring Through the Back Using an UWB Impulse Radar With Body Coupled Antennas | [
{
"docid": "pos:1840469_0",
"text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.",
"title": ""
}
] | [
{
"docid": "neg:1840469_0",
"text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor",
"title": ""
},
{
"docid": "neg:1840469_1",
"text": "Splitting of the behavioural activity phase has been found in nocturnal rodents with suprachiasmatic nucleus (SCN) coupling disorder. A similar phenomenon was observed in the sleep phase in the diurnal human discussed here, suggesting that there are so-called evening and morning oscillators in the SCN of humans. The present case suffered from bipolar disorder refractory to various treatments, and various circadian rhythm sleep disorders, such as delayed sleep phase, polyphasic sleep, separation of the sleep bout resembling splitting and circabidian rhythm (48 h), were found during prolonged depressive episodes with hypersomnia. Separation of sleep into evening and morning components and delayed sleep-offset (24.69-h cycle) developed when lowering and stopping the dose of aripiprazole (APZ). However, resumption of APZ improved these symptoms in 2 weeks, accompanied by improvement in the patient's depressive state. Administration of APZ may improve various circadian rhythm sleep disorders, as well as improve and prevent manic-depressive episodes, via augmentation of coupling in the SCN network.",
"title": ""
},
{
"docid": "neg:1840469_2",
"text": "In this paper, we present a carry skip adder (CSKA) structure that has a higher speed yet lower energy consumption compared with the conventional one. The speed enhancement is achieved by applying concatenation and incrementation schemes to improve the efficiency of the conventional CSKA (Conv-CSKA) structure. In addition, instead of utilizing multiplexer logic, the proposed structure makes use of AND-OR-Invert (AOI) and OR-AND-Invert (OAI) compound gates for the skip logic. The structure may be realized with both fixed stage size and variable stage size styles, wherein the latter further improves the speed and energy parameters of the adder. Finally, a hybrid variable latency extension of the proposed structure, which lowers the power consumption without considerably impacting the speed, is presented. This extension utilizes a modified parallel structure for increasing the slack time, and hence, enabling further voltage reduction. The proposed structures are assessed by comparing their speed, power, and energy parameters with those of other adders using a 45-nm static CMOS technology for a wide range of supply voltages. The results that are obtained using HSPICE simulations reveal, on average, 44% and 38% improvements in the delay and energy, respectively, compared with those of the Conv-CSKA. In addition, the power-delay product was the lowest among the structures considered in this paper, while its energy-delay product was almost the same as that of the Kogge-Stone parallel prefix adder with considerably smaller area and power consumption. Simulations on the proposed hybrid variable latency CSKA reveal reduction in the power consumption compared with the latest works in this field while having a reasonably high speed.",
"title": ""
},
{
"docid": "neg:1840469_3",
"text": "Interactive narrative is a form of storytelling in which users affect a dramatic storyline through actions by assuming the role of characters in a virtual world. This extended abstract outlines the SCHEHERAZADE-IF system, which uses crowdsourcing and artificial intelligence to automatically construct text-based interactive narrative experiences.",
"title": ""
},
{
"docid": "neg:1840469_4",
"text": "UNLABELLED\nDue to the localized surface plasmon (LSP) effect induced by Ag nanoparticles inside black silicon, the optical absorption of black silicon is enhanced dramatically in near-infrared range (1,100 to 2,500 nm). The black silicon with Ag nanoparticles shows much higher absorption than black silicon fabricated by chemical etching or reactive ion etching over ultraviolet to near-infrared (UV-VIS-NIR, 250 to 2,500 nm). The maximum absorption even increased up to 93.6% in the NIR range (820 to 2,500 nm). The high absorption in NIR range makes LSP-enhanced black silicon a potential material used for NIR-sensitive optoelectronic device.\n\n\nPACS\n78.67.Bf; 78.30.Fs; 78.40.-q; 42.70.Gi.",
"title": ""
},
{
"docid": "neg:1840469_5",
"text": "The observed poor quality of graduates of some Nigerian Universities in recent times has been partly traced to inadequacies of the National University Admission Examination System. In this study an Artificial Neural Network (ANN) model, for predicting the likely performance of a candidate being considered for admission into the university was developed and tested. Various factors that may likely influence the performance of a student were identified. Such factors as ordinary level subjects’ scores and subjects’ combination, matriculation examination scores, age on admission, parental background, types and location of secondary school attended and gender, among others, were then used as input variables for the ANN model. A model based on the Multilayer Perceptron Topology was developed and trained using data spanning five generations of graduates from an Engineering Department of University of Ibadan, Nigeria’s first University. Test data evaluation shows that the ANN model is able to correctly predict the performance of more than 70% of prospective students. (",
"title": ""
},
{
"docid": "neg:1840469_6",
"text": "Developing Question Answering systems has been one of the important research issues because it requires insights from a variety of disciplines, including, Artificial Intelligence, Information Retrieval, Information Extraction, Natural Language Processing, and Psychology. In this paper we realize a formal model for a lightweight semantic–based open domain yes/no Arabic question answering system based on paragraph retrieval (with variable length). We propose a constrained semantic representation. Using an explicit unification framework based on semantic similarities and query expansion (synonyms and antonyms). This frequently improves the precision of the system. Employing the passage retrieval system achieves a better precision by retrieving more paragraphs that contain relevant answers to the question; It significantly reduces the amount of text to be processed by the system.",
"title": ""
},
{
"docid": "neg:1840469_7",
"text": "There are currently very few practical methods for assessin g the quality of resources or the reliability of other entities in the o nline environment. This makes it difficult to make decisions about which resources ca n be relied upon and which entities it is safe to interact with. Trust and repu tation systems are aimed at solving this problem by enabling service consumers to eliably assess the quality of services and the reliability of entities befo r they decide to use a particular service or to interact with or depend on a given en tity. Such systems should also allow serious service providers and online play ers to correctly represent the reliability of themselves and the quality of thei r s rvices. In the case of reputation systems, the basic idea is to let parties rate e ch other, for example after the completion of a transaction, and use the aggreg ated ratings about a given party to derive its reputation score. In the case of tru st systems, the basic idea is to analyse and combine paths and networks of trust rel ationships in order to derive measures of trustworthiness of specific nodes. Rep utation scores and trust measures can assist other parties in deciding whether or not to transact with a given party in the future, and whether it is safe to depend on a given resource or entity. This represents an incentive for good behaviour and for offering reliable resources, which thereby tends to have a positive effect on t he quality of online markets and communities. This chapter describes the backgr ound, current status and future trend of online trust and reputation systems.",
"title": ""
},
{
"docid": "neg:1840469_8",
"text": "It is known that local filtering-based edge preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into an existing guided image filter (GIF) to address the problem. The WGIF inherits advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, which is same as the GIF and 2) the WGIF can avoid halo artifacts like the existing global smoothing filters. The WGIF is applied for single image detail enhancement, single image haze removal, and fusion of differently exposed images. Experimental results show that the resultant algorithms produce images with better visual quality and at the same time halo artifacts can be reduced/avoided from appearing in the final images with negligible increment on running times.",
"title": ""
},
{
"docid": "neg:1840469_9",
"text": "A significant challenge for the practical application of reinforcement learning in the real world is the need to specify an oracle reward function that correctly defines a task. Inverse reinforcement learning (IRL) seeks to avoid this challenge by instead inferring a reward function from expert behavior. While appealing, it can be impractically expensive to collect datasets of demonstrations that cover the variation common in the real world (e.g. opening any type of door). Thus in practice, IRL must commonly be performed with only a limited set of demonstrations where it can be exceedingly difficult to unambiguously recover a reward function. In this work, we exploit the insight that demonstrations from other tasks can be used to constrain the set of possible reward functions by learning a “prior” that is specifically optimized for the ability to infer expressive reward functions from limited numbers of demonstrations. We demonstrate that our method can efficiently recover rewards from images for novel tasks and provide intuition as to how our approach is analogous to learning a prior.",
"title": ""
},
{
"docid": "neg:1840469_10",
"text": "Adversarial machine learning in the context of image processing and related applications has received a large amount of attention. However, adversarial machine learning, especially adversarial deep learning, in the context of malware detection has received much less attention despite its apparent importance. In this paper, we present a framework for enhancing the robustness of Deep Neural Networks (DNNs) against adversarial malware samples, dubbed Hashing Transformation Deep Neural Networks (HashTran-DNN). The core idea is to use hash functions with a certain locality-preserving property to transform samples to enhance the robustness of DNNs in malware classification. The framework further uses a Denoising Auto-Encoder (DAE) regularizer to reconstruct the hash representations of samples, making the resulting DNN classifiers capable of attaining the locality information in the latent space. We experiment with two concrete instantiations of the HashTranDNN framework to classify Android malware. Experimental results show that four known attacks can render standard DNNs useless in classifying Android malware, that known defenses can at most defend three of the four attacks, and that HashTran-DNN can effectively defend against all of the four attacks.",
"title": ""
},
{
"docid": "neg:1840469_11",
"text": "The paper presents a finite-element method-based design and analysis of interior permanent magnet synchronous motor with flux barriers (IPMSMFB). Various parameters of IPMSMFB rotor structure were taken into account at determination of a suitable rotor construction. On the basis of FEM analysis the rotor of IPMSMFB with three-flux barriers was built. Output torque capability and flux weakening performance of IPMSMFB were compared with performances of conventional interior permanent magnet synchronous motor (IPMSM), having the same rotor geometrical dimensions and the same stator construction. The predicted performance of conventional IPMSM and IPMSMFB was confirmed with the measurements over a wide-speed range of constant output power operation.",
"title": ""
},
{
"docid": "neg:1840469_12",
"text": "The SIGIR 2016 workshop on Neural Information Retrieval (Neu-IR) took place on 21 July, 2016 in Pisa. The goal of the Neu-IR (pronounced \"New IR\") workshop was to serve as a forum for academic and industrial researchers, working at the intersection of information retrieval (IR) and machine learning, to present new work and early results, compare notes on neural network toolkits, share best practices, and discuss the main challenges facing this line of research. In total, 19 papers were presented, including oral and poster presentations. The workshop program also included a session on invited \"lightning talks\" to encourage participants to share personal insights and negative results with the community. The workshop was well-attended with more than 120 registrations.",
"title": ""
},
{
"docid": "neg:1840469_13",
"text": "Surveys throughout the world have shown consistently that persons over 65 are far less likely to be victims of crime than younger age groups. However, many elderly people are unduly fearful about crime which has an adverse effect on their quality of life. This Trends and Issues puts this matter into perspective, but also discusses the more covert phenomena of abuse and neglect of the elderly. Our senior citizens have earned the right to live in dignity and without fear: the community as a whole should contribute to this process. Duncan Chappell Director",
"title": ""
},
{
"docid": "neg:1840469_14",
"text": "Acquired upper extremity amputations beyond the finger can have substantial physical, psychological, social, and economic consequences for the patient. The hand surgeon is one of a team of specialists in the care of these patients, but the surgeon plays a critical role in the surgical management of these wounds. The execution of a successful amputation at each level of the limb allows maximum use of the residual extremity, with or without a prosthesis, and minimizes the known complications of these injuries. This article reviews current surgical options in performing and managing upper extremity amputations proximal to the finger.",
"title": ""
},
{
"docid": "neg:1840469_15",
"text": "Recommender systems are widely used in online applications since they enable personalized service to the users. The underlying collaborative filtering techniques work on user’s data which are mostly privacy sensitive and can be misused by the service provider. To protect the privacy of the users, we propose to encrypt the privacy sensitive data and generate recommendations by processing them under encryption. With this approach, the service provider learns no information on any user’s preferences or the recommendations made. The proposed method is based on homomorphic encryption schemes and secure multiparty computation (MPC) techniques. The overhead of working in the encrypted domain is minimized by packing data as shown in the complexity analysis.",
"title": ""
},
{
"docid": "neg:1840469_16",
"text": "When attempting to improve the performance of a deep learning system, there are more or less three approaches one can take: the first is to improve the structure of the model, perhaps adding another layer, switching from simple recurrent units to LSTM cells [4], or–in the realm of NLP–taking advantage of syntactic parses (e.g. as in [13, et seq.]); another approach is to improve the initialization of the model, guaranteeing that the early-stage gradients have certain beneficial properties [3], or building in large amounts of sparsity [6], or taking advantage of principles of linear algebra [15]; the final approach is to try a more powerful learning algorithm, such as including a decaying sum over the previous gradients in the update [12], by dividing each parameter update by the L2 norm of the previous updates for that parameter [2], or even by foregoing first-order algorithms for more powerful but more computationally costly second order algorithms [9]. This paper has as its goal the third option—improving the quality of the final solution by using a faster, more powerful learning algorithm.",
"title": ""
},
{
"docid": "neg:1840469_17",
"text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\nhabibima@informatik.hu-berlin.de.",
"title": ""
},
{
"docid": "neg:1840469_18",
"text": "Following the growing share of wind energy in electric power systems, several wind power forecasting techniques have been reported in the literature in recent years. In this paper, a wind power forecasting strategy composed of a feature selection component and a forecasting engine is proposed. The feature selection component applies an irrelevancy filter and a redundancy filter to the set of candidate inputs. The forecasting engine includes a new enhanced particle swarm optimization component and a hybrid neural network. The proposed wind power forecasting strategy is applied to real-life data from wind power producers in Alberta, Canada and Oklahoma, U.S. The presented numerical results demonstrate the efficiency of the proposed strategy, compared to some other existing wind power forecasting methods.",
"title": ""
},
{
"docid": "neg:1840469_19",
"text": "We propose a holographic-laser-drawing volumetric display using a computer-generated hologram displayed on a liquid crystal spatial light modulator and multilayer fluorescent screen. The holographic-laser-drawing technique has enabled three things; (i) increasing the number of voxels of the volumetric graphics per unit time; (ii) increasing the total input energy to the volumetric display because the maximum energy incident at a point in the multilayer fluorescent screen is limited by the damage threshold; (iii) controlling the size, shape and spatial position of voxels. In this paper, we demonstrated (i) and (ii). The multilayer fluorescent screen was newly developed to display colored voxels. The thin layer construction of the multilayer fluorescent screen minimized the axial length of the voxels. A two-color volumetric display with blue-green voxels and red voxels were demonstrated.",
"title": ""
}
] |
1840470 | An Embedded System-on-Chip Architecture for Real-time Visual Detection and Matching | [
{
"docid": "pos:1840470_0",
"text": "Feature extraction is an essential part in applications that require computer vision to recognize objects in an image processed. To extract the features robustly, feature extraction algorithms are often very demanding in computation so that the performance achieved by pure software is far from real-time. Among those feature extraction algorithms, scale-invariant feature transform (SIFT) has gained a lot of popularity recently. In this paper, we propose an all-hardware SIFT accelerator-the fastest of its kind to our knowledge. It consists of two interactive hardware components, one for key point identification, and the other for feature descriptor generation. We successfully developed a segment buffer scheme that could not only feed data to the computing modules in a data-streaming manner, but also reduce about 50% memory requirement than a previous work. With a parallel architecture incorporating a three-stage pipeline, the processing time of the key point identification is only 3.4 ms for one video graphics array (VGA) image. Taking also into account the feature descriptor generation part, the overall SIFT processing time for a VGA image can be kept within 33 ms (to support real-time operation) when the number of feature points to be extracted is fewer than 890.",
"title": ""
},
{
"docid": "pos:1840470_1",
"text": "Ever since the introduction of freely programmable hardware components into modern graphics hardware, graphics processing units (GPUs) have become increasingly popular for general purpose computations. Especially when applied to computer vision algorithms where a Single set of Instructions has to be executed on Multiple Data (SIMD), GPU-based algorithms can provide a major increase in processing speed compared to their CPU counterparts. This paper presents methods that take full advantage of modern graphics card hardware for real-time scale invariant feature detection and matching. The focus lies on the extraction of feature locations and the generation of feature descriptors from natural images. The generation of these feature-vectors is based on the Speeded Up Robust Features (SURF) method [1] due to its high stability against rotation, scale and changes in lighting condition of the processed images. With the presented methods feature detection and matching can be performed at framerates exceeding 100 frames per second for 640 times 480 images. The remaining time can then be spent on fast matching against large feature databases on the GPU while the CPU can be used for other tasks.",
"title": ""
}
] | [
{
"docid": "neg:1840470_0",
"text": "Reasoning about objects and their affordances is a fundamental problem for visual intelligence. Most of the previous work casts this problem as a classification task where separate classifiers are trained to label objects, recognize attributes, or assign affordances. In this work, we consider the problem of object affordance reasoning using a knowledge base representation. Diverse information of objects are first harvested from images and other meta-data sources. We then learn a knowledge base (KB) using a Markov Logic Network (MLN). Given the learned KB, we show that a diverse set of visual inference tasks can be done in this unified framework without training separate classifiers, including zeroshot affordance prediction and object recognition given human poses.",
"title": ""
},
{
"docid": "neg:1840470_1",
"text": "As the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) system which uses an electric motor came to use a few year ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with design and optimization of an interior permanent magnet (IPM) motor for power steering application. Simulated Annealing method is used for optimization. After optimization and finding motor parameters, An IPM motor and drive with mechanical parts of EPS system is simulated and performance evaluation of system is done.",
"title": ""
},
{
"docid": "neg:1840470_2",
"text": "Using 2 phase-change memory (PCM) devices per synapse, a 3-layer perceptron network with 164,885 synapses is trained on a subset (5000 examples) of the MNIST database of handwritten digits using a backpropagation variant suitable for NVM+selector crossbar arrays, obtaining a training (generalization) accuracy of 82.2% (82.9%). Using a neural network (NN) simulator matched to the experimental demonstrator, extensive tolerancing is performed with respect to NVM variability, yield, and the stochasticity, linearity and asymmetry of NVM-conductance response.",
"title": ""
},
{
"docid": "neg:1840470_3",
"text": "Spark, a subset of Ada for engineering safety and security-critical systems, is one of the best commercially available frameworks for formal-methodssupported development of critical software. Spark is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. Symbolic execution (SymExe) techniques have made significant strides in automating reasoning about deep semantic properties of source code. However, most work on SymExe has focused on bugfinding and test case generation as opposed to tasks that are more verificationoriented such as contract checking. In this paper, we present: (a) SymExe techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.",
"title": ""
},
{
"docid": "neg:1840470_4",
"text": "The current paper proposes a slack-based version of the Super SBM, which is an alternative superefficiency model for the SBM proposed by Tone. Our two-stage approach provides the same superefficiency score as that obtained by the Super SBM model when the evaluated DMU is efficient and yields the same efficiency score as that obtained by the SBM model when the evaluated DMU is inefficient. The projection identified by the Super SBM model may not be strongly Pareto efficient; however, the projection identified from our approach is strongly Pareto efficient. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840470_5",
"text": "This is the proposal for RumourEval-2019, which will run in early 2019 as part of that year’s SemEval event. Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the dangers of “fake news” have become a mainstream concern. Yet automated support for rumour checking remains in its infancy. For this reason, it is important that a shared task in this area continues to provide a focus for effort, which is likely to increase. We therefore propose a continuation in which the veracity of further rumours is determined, and as previously, supportive of this goal, tweets discussing them are classified according to the stance they take regarding the rumour. Scope is extended compared with the first RumourEval, in that the dataset is substantially expanded to include Reddit as well as Twitter data, and additional languages are also",
"title": ""
},
{
"docid": "neg:1840470_6",
"text": "Deep learning has made remarkable achievement in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only little data are available. This specific task is called few-shot learning. To address it, we propose a novel algorithm for fewshot learning using discrete geometry, in the sense that the samples in a class are modeled as a reduced simplex. The volume of the simplex is used for the measurement of class scatter. During testing, combined with the test sample and the points in the class, a new simplex is formed. Then the similarity between the test sample and the class can be quantized with the ratio of volumes of the new simplex to the original class simplex. Moreover, we present an approach to constructing simplices using local regions of feature maps yielded by convolutional neural networks. Experiments on Omniglot and miniImageNet verify the effectiveness of our simplex algorithm on few-shot learning.",
"title": ""
},
{
"docid": "neg:1840470_7",
"text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.",
"title": ""
},
{
"docid": "neg:1840470_8",
"text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.",
"title": ""
},
{
"docid": "neg:1840470_9",
"text": "As deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of deep web, achieving wide coverage and high efficiency is a challenging issue. We propose a two-stage framework, namely SmartCrawler, for efficient harvesting deep web interfaces. In the first stage, SmartCrawler performs site-based searching for center pages with the help of search engines, avoiding visiting a large number of pages. To achieve more accurate results for a focused crawl, SmartCrawler ranks websites to prioritize highly relevant ones for a given topic. In the second stage, SmartCrawler achieves fast in-site searching by excavating most relevant links with an adaptive link-ranking. To eliminate bias on visiting some highly relevant links in hidden web directories, we design a link tree data structure to achieve wider coverage for a website. Our experimental results on a set of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep-web interfaces from large-scale sites and achieves higher harvest rates than other crawlers.",
"title": ""
},
{
"docid": "neg:1840470_10",
"text": "This paper presents a low power continuous time 2nd order Low Pass Butterworth filter operating at power supply of 0.5V suitably designed for biomedical applications. A 3-dB bandwidth of 100 Hz using technology node of 0.18μm is achieved. The operational transconductance amplifier is a significant building block in continuous time filter design. To achieve necessary voltage headroom a pseudo-differential architecture is used to design bulk driven transconductor. In contrast, to the gate-driven OTA bulk-driven have the ability to operate over a wide input range. The output common mode voltage of the transconductor is set by a Common Mode Feedback (CMFB) circuit. The simulation results show that the filter has a peak-to-peak signal swing of 150mV (differential) for 1% THD, a dynamic range of 74.62 dB and consumes a total power of 0.225μW when operating at a supply voltage of 0.5V. The Figure of Merit (FOM) achieved by the filter is 0.055 fJ, lowest among similar low-voltage filters found in the literature.",
"title": ""
},
{
"docid": "neg:1840470_11",
"text": "Stroke is a leading cause of severe physical disability, causing a range of impairments. Frequently stroke survivors are left with partial paralysis on one side of the body and movement can be severely restricted in the affected side’s hand and arm. We know that effective rehabilitation must be early, intensive and repetitive, which leads to the challenge of how to maintain motivation for people undergoing therapy. This paper discusses why games may be an effective way of addressing the problem of engagement in therapy and analyses which game design patterns may be important for rehabilitation. We present a number of serious games that our group has developed for upper limb rehabilitation. Results of an evaluation of the games are presented which indicate that they may be appropriate for people with stroke.",
"title": ""
},
{
"docid": "neg:1840470_12",
"text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.",
"title": ""
},
{
"docid": "neg:1840470_13",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/class#fall2012/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) Please indicate the submission time and number of late dates clearly in your submission. SCPD students: Please email your solutions to cs229-qa@cs.stanford.edu with the subject line \" Problem Set 2 Submission \". The first page of your submission should be the homework routing form, which can be found on the SCPD website. Your submission (including the routing form) must be a single pdf file, or we may not be able to grade it. If you are writing your solutions out by hand, please write clearly and in a reasonably large font using a dark pen to improve legibility. 1. [15 points] Constructing kernels In class, we saw that by choosing a kernel K(x, z) = φ(x) T φ(z), we can implicitly map data to a high dimensional space, and have the SVM algorithm work in that space. One way to generate kernels is to explicitly define the mapping φ to a higher dimensional space, and then work out the corresponding K. However in this question we are interested in direct construction of kernels. I.e., suppose we have a function K(x, z) that we think gives an appropriate similarity measure for our learning problem, and we are considering plugging K into the SVM as the kernel function. However for K(x, z) to be a valid kernel, it must correspond to an inner product in some higher dimensional space resulting from some feature mapping φ. Mercer's theorem tells us that K(x, z) is a (Mercer) kernel if and only if for any finite set {x (1) ,. .. , x (m) }, the matrix K is symmetric and positive semidefinite, where the square matrix K ∈ R m×m is given by K ij = K(x (i) , x (j)). Now here comes the question: Let K 1 , K 2 be kernels …",
"title": ""
},
{
"docid": "neg:1840470_14",
"text": "Over the past decade, information technology has dramatically changed the context in which economic transactions take place. Increasingly, transactions are computer-mediated, so that, relative to humanhuman interactions, human-computer interactions are gaining in relevance. Computer-mediated transactions, and in particular those related to the Internet, increase perceptions of uncertainty. Therefore, trust becomes a crucial factor in the reduction of these perceptions. To investigate this important construct, we studied individual trust behavior and the underlying brain mechanisms through a multi-round trust game. Participants acted in the role of an investor, playing against both humans and avatars. The behavioral results show that participants trusted avatars to a similar degree as they trusted humans. Participants also revealed similarity in learning an interaction partner’s trustworthiness, independent of whether the partner was human or avatar. However, the neuroimaging findings revealed differential responses within the brain network that is associated with theory of mind (mentalizing) depending on the interaction partner. Based on these results, the major conclusion of our study is that, in a situation of a computer with human-like characteristics (avatar), trust behavior in human-computer interaction resembles that of human-human interaction. On a deeper neurobiological level, our study reveals that thinking about an interaction partner’s trustworthiness activates the mentalizing network more strongly if the trustee is a human rather than an avatar. We discuss implications of these findings for future research.",
"title": ""
},
{
"docid": "neg:1840470_15",
"text": "This paper shows that several sorts of expressions cannot be interpreted metaphorically, including determiners, tenses, etc. Generally, functional categories cannot be interpreted metaphorically, while lexical categories can. This reveals a semantic property of functional categories, and it shows that metaphor can be used as a probe for investigating them. It also reveals an important linguistic constraint on metaphor. The paper argues this constraint applies to the interface between the cognitive systems for language and metaphor. However, the constraint does not completely prevent structural elements of language from being available to the metaphor system. The paper shows that linguistic structure within the lexicon, specifically, aspectual structure, is available to the metaphor system. This paper takes as its starting point an observation about which sorts of expressions can receive metaphorical interpretations. Surprisingly, there are a number of expressions that cannot be interpreted metaphorically. Quantifier expressions (i.e. determiners) provide a good example. Consider a richly metaphorical sentence like: (1) Read o’er the volume of young Paris’ face, And find delight writ there with beauty’s pen; Examine every married lineament (Romeo and Juliet I.3). Metaphor and Lexical Semantics 2 In appreciating Shakespeare’s lovely use of language, writ and pen are obviously understood metaphorically, and married lineament must be too. (The meanings listed in the Oxford English Dictionary for lineament include diagram, portion of a body, and portion of the face viewed with respect to its outline.) In spite of all this rich metaphor, every means simply every, in its usual literal form. Indeed, we cannot think of what a metaphorical interpretation of every would be. As we will see, this is not an isolated case: while many expressions can be interpreted metaphorically, there is a broad and important group of expressions that cannot. Much of this paper will be devoted to exploring the significance of this observation. It shows us something about metaphor. In particular, it shows that there is a non-trivial linguistic constraint on metaphor. This is a somewhat surprising result, as one of the leading ideas in the theory of metaphor is that metaphor comprehension is an aspect of our more general cognitive abilities, and not tied to the specific structure of language. The constraint on metaphor also shows us something about linguistic meaning. We will see that the class of expressions that fail to have metaphorical interpretations is a linguistically important one. Linguistic items are often grouped into two classes: lexical categories, including nouns, verbs, etc., and functional categories, including determiners (quantifier expressions), tenses, etc. Generally, we will see that lexical categories can have metaphorical interpretations, while functional ones cannot. This reveals something about the kinds of semantic properties these expressions can have. It also shows that we can use the availability of metaphorical interpretation as a kind of probe, to help distinguish these sorts of categories. Functional categories are often described as ‘structural elements’ of language. They are the ‘linguistic glue’ that holds sentences together, and so, their expressions are described as being semantically ‘thin’. Our metaphor probe will give some substance to this (often very rough-andready) idea. 
But it raises the question of whether all such structural elements in language—anything we can describe as ‘linguistic glue’— are invisible when it comes to metaphorical interpretation. We will see that this is not so. In particular, we will see that linguistic structure that can be found within lexical items may be available to metaphorical interpretation. This paper will show specifically that so-called aspecVol. 3: A Figure of Speech",
"title": ""
},
{
"docid": "neg:1840470_16",
"text": "BACKGROUND\nSubacromial impingement syndrome (SAIS) is a painful condition resulting from the entrapment of anatomical structures between the anteroinferior corner of the acromion and the greater tuberosity of the humerus.\n\n\nOBJECTIVE\nThe aim of this study was to evaluate the short-term effectiveness of high-intensity laser therapy (HILT) versus ultrasound (US) therapy in the treatment of SAIS.\n\n\nDESIGN\nThe study was designed as a randomized clinical trial.\n\n\nSETTING\nThe study was conducted in a university hospital.\n\n\nPATIENTS\nSeventy patients with SAIS were randomly assigned to a HILT group or a US therapy group.\n\n\nINTERVENTION\nStudy participants received 10 treatment sessions of HILT or US therapy over a period of 2 consecutive weeks.\n\n\nMEASUREMENTS\nOutcome measures were the Constant-Murley Scale (CMS), a visual analog scale (VAS), and the Simple Shoulder Test (SST).\n\n\nRESULTS\nFor the 70 study participants (42 women and 28 men; mean [SD] age=54.1 years [9.0]; mean [SD] VAS score at baseline=6.4 [1.7]), there were no between-group differences at baseline in VAS, CMS, and SST scores. At the end of the 2-week intervention, participants in the HILT group showed a significantly greater decrease in pain than participants in the US therapy group. Statistically significant differences in change in pain, articular movement, functionality, and muscle strength (force-generating capacity) (VAS, CMS, and SST scores) were observed after 10 treatment sessions from the baseline for participants in the HILT group compared with participants in the US therapy group. In particular, only the difference in change of VAS score between groups (1.65 points) surpassed the accepted minimal clinically important difference for this tool.\n\n\nLIMITATIONS\nThis study was limited by sample size, lack of a control or placebo group, and follow-up period.\n\n\nCONCLUSIONS\nParticipants diagnosed with SAIS showed greater reduction in pain and improvement in articular movement functionality and muscle strength of the affected shoulder after 10 treatment sessions of HILT than did participants receiving US therapy over a period of 2 consecutive weeks.",
"title": ""
},
{
"docid": "neg:1840470_17",
"text": "Sudhausia aristotokia n. gen., n. sp. and S. crassa n. gen., n. sp. (Nematoda: Diplogastridae): viviparous new species with precocious gonad development Matthias HERRMANN 1, Erik J. RAGSDALE 1, Natsumi KANZAKI 2 and Ralf J. SOMMER 1,∗ 1 Max Planck Institute for Developmental Biology, Department of Evolutionary Biology, Spemannstraße 37, Tübingen, Germany 2 Forest Pathology Laboratory, Forestry and Forest Products Research Institute, 1 Matsunosato, Tsukuba, Ibaraki 305-8687, Japan",
"title": ""
},
{
"docid": "neg:1840470_18",
"text": "The performance of a circular patch antenna with slotted ground plane for body centric communication mainly in the health care monitoring systems for Onbody application is researched. The CP antenna is intended for utilization in UWB, body centric communication applications i.e. in between 3.1 to 10.6 GHz. The proposed antenna is CP antenna of (30 x 30 x 1.6) mm. It is simulated via CST microwave studio suite. This CP antenna covers the entire ultra wide frequency range (3.9174-13.519) GHz (9.6016) GHz with the VSWR of (3.818 GHz13.268 GHz). Antenna’s group delay is to be observed as 3.5 ns. The simulated results of antenna are given in terms of , VSWR, group delay and radiation pattern. Keywords— UWB, Body Worn Antenna, BodyCentric Communication.",
"title": ""
}
] |
1840471 | Deep Learning for Automated Quality Assessment of Color Fundus Images in Diabetic Retinopathy Screening | [
{
"docid": "pos:1840471_0",
"text": "Reliable verification of image quality of retinal screening images is a prerequisite for the development of automatic screening systems for diabetic retinopathy. A system is presented that can automatically determine whether the quality of a retinal screening image is sufficient for automatic analysis. The system is based on the assumption that an image of sufficient quality should contain particular image structures according to a certain pre-defined distribution. We cluster filterbank response vectors to obtain a compact representation of the image structures found within an image. Using this compact representation together with raw histograms of the R, G, and B color planes, a statistical classifier is trained to distinguish normal from low quality images. The presented system does not require any previous segmentation of the image in contrast with previous work. The system was evaluated on a large, representative set of 1000 images obtained in a screening program. The proposed method, using different feature sets and classifiers, was compared with the ratings of a second human observer. The best system, based on a Support Vector Machine, has performance close to optimal with an area under the ROC curve of 0.9968.",
"title": ""
}
] | [
{
"docid": "neg:1840471_0",
"text": "Exercise-induced ST-segment elevation was correlated with myocardial perfusion abnormalities and coronary artery obstruction in 35 patients. Ten patients (group 1) developed exercise ST elevation in leads without Q waves on the resting ECG. The site of ST elevation corresponded to both a reversible perfusion defect and a severely obstructed coronary artery. Associated ST-segment depression in other leads occurred in seven patients, but only one had a second perfusion defect at the site of ST depression. In three of the 10 patients, abnormal left ventricular wall motion at the site of exercise-induced ST elevation was demonstrated by ventriculography. Twenty-five patients (group 2) developed exercise ST elevation in leads with Q waves on the resting ECG. The site ofST elevation corresponded to severe coronary artery stenosis and a thallium perfusion defect that persisted on the 4-hour scan (constant in 12 patients, decreased in 13). Associated ST depression in other leads occurred in 11 patients and eight (73%) had a second perfusion defect at the site of ST depression. In all 25 patients with previous transmural infarction, abnormal left ventricular wall motion at the site of the Q waves was shown by ventriculography. In patients without previous myocardial infarction, the site of exercise-induced ST-segment elevation indicates the site of severe transient myocardial ischemia, and associated ST depression is usually reciprocal. In patients with Q waves on the resting ECG, exercise ST elevation way be due to peri-infarctional ischemia, abnormal ventricular wall motion or both. Exercise ST-segment depression may be due to a second area of myocardial ischemia rather than being reciprocal to ST elevation.",
"title": ""
},
{
"docid": "neg:1840471_1",
"text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.",
"title": ""
},
{
"docid": "neg:1840471_2",
"text": "For smart grid execution, one of the most important requirements is fast, precise, and efficient synchronized measurements, which are possible by phasor measurement unit (PMU). To achieve fully observable network with the least number of PMUs, optimal placement of PMU (OPP) is crucial. In trying to achieve OPP, priority may be given at critical buses, generator buses, or buses that are meant for future extension. Also, different applications will have to be kept in view while prioritizing PMU placement. Hence, OPP with multiple solutions (MSs) can offer better flexibility for different placement strategies as it can meet the best solution based on the requirements. To provide MSs, an effective exponential binary particle swarm optimization (EBPSO) algorithm is developed. In this algorithm, a nonlinear inertia-weight-coefficient is used to improve the searching capability. To incorporate previous position of particle, two innovative mathematical equations that can update particle's position are formulated. For quick and reliable convergence, two useful filtration techniques that can facilitate MSs are applied. Single mutation operator is conditionally applied to avoid stagnation. The EBPSO algorithm is so developed that it can provide MSs for various practical contingencies, such as single PMU outage and single line outage for different systems.",
"title": ""
},
{
"docid": "neg:1840471_3",
"text": "This paper describes a least squares (LS) channel estimation scheme for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems based on pilot tones. We first compute the mean square error (MSE) of the LS channel estimate. We then derive optimal pilot sequences and optimal placement of the pilot tones with respect to this MSE. It is shown that the optimal pilot sequences are equipowered, equispaced, and phase shift orthogonal. To reduce the training overhead, an LS channel estimation scheme over multiple OFDM symbols is also discussed. Moreover, to enhance channel estimation, a recursive LS (RLS) algorithm is proposed, for which we derive the optimal forgetting or tracking factor. This factor is found to be a function of both the noise variance and the channel Doppler spread. Through simulations, it is shown that the optimal pilot sequences derived in this paper outperform both the orthogonal and random pilot sequences. It is also shown that a considerable gain in signal-to-noise ratio (SNR) can be obtained by using the RLS algorithm, especially in slowly time-varying channels.",
"title": ""
},
{
"docid": "neg:1840471_4",
"text": "In this thesis, the multi-category dataset has been incorporated with the robust feature descriptor using the scale invariant feature transform (SIFT), SURF and FREAK along with the multi-category enabled support vector machine (mSVM). The multi-category support vector machine (mSVM) has been designed with the iterative phases to make it able to work with the multi-category dataset. The mSVM represents the training samples of main class as the primary class in every iterative phase and all other training samples are categorized as the secondary class for the support vector machine classification. The proposed model is made capable of working with the variations in the indoor scene image dataset, which are noticed in the form of the color, texture, light, image orientation, occlusion and color illuminations. Several experiments have been conducted over the proposed model for the performance evaluation of the indoor scene recognition system in the proposed model. The results of the proposed model have been obtained in the form of the various performance parameters of statistical errors, precision, recall, F1-measure and overall accuracy. The proposed model has clearly outperformed the existing models in the terms of the overall accuracy. The proposed model improvement has been recorded higher than ten percent for all of the evaluated parameters against the existing models based upon SURF, FREAK, etc.",
"title": ""
},
{
"docid": "neg:1840471_5",
"text": "We investigate methods for combining multiple selfsupervised tasks—i.e., supervised tasks where data can be collected without manual labeling—in order to train a single visual representation. First, we provide an apples-toapples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for “harmonizing” network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks—even via a na¨ýve multihead architecture—always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.",
"title": ""
},
{
"docid": "neg:1840471_6",
"text": "Despite the many solutions proposed by industry and the research community to address phishing attacks, this problem continues to cause enormous damage. Because of our inability to deter phishing attacks, the research community needs to develop new approaches to anti-phishing solutions. Most of today's anti-phishing technologies focus on automatically detecting and preventing phishing attacks. While automation makes anti-phishing tools user-friendly, automation also makes them suffer from false positives, false negatives, and various practical hurdles. As a result, attackers often find simple ways to escape automatic detection.\n This paper presents iTrustPage - an anti-phishing tool that does not rely completely on automation to detect phishing. Instead, iTrustPage relies on user input and external repositories of information to prevent users from filling out phishing Web forms. With iTrustPage, users help to decide whether or not a Web page is legitimate. Because iTrustPage is user-assisted, iTrustPage avoids the false positives and the false negatives associated with automatic phishing detection. We implemented iTrustPage as a downloadable extension to FireFox. After being featured on the Mozilla website for FireFox extensions, iTrustPage was downloaded by more than 5,000 users in a two week period. We present an analysis of our tool's effectiveness and ease of use based on our examination of usage logs collected from the 2,050 users who used iTrustPage for more than two weeks. Based on these logs, we find that iTrustPage disrupts users on fewer than 2% of the pages they visit, and the number of disruptions decreases over time.",
"title": ""
},
{
"docid": "neg:1840471_7",
"text": "In this paper, we exploit a new multi-country historical dataset on public (government) debt to search for a systemic relationship between high public debt levels, growth and inflation. Our main result is that whereas the link between growth and debt seems relatively weak at “normal” debt levels, median growth rates for countries with public debt over roughly 90 percent of GDP are about one percent lower than otherwise; average (mean) growth rates are several percent lower. Surprisingly, the relationship between public debt and growth is remarkably similar across emerging markets and advanced economies. This is not the case for inflation. We find no systematic relationship between high debt levels and inflation for advanced economies as a group (albeit with individual country exceptions including the United States). By contrast, in emerging market countries, high public debt levels coincide with higher inflation. Our topic would seem to be a timely one. Public debt has been soaring in the wake of the recent global financial maelstrom, especially in the epicenter countries. This should not be surprising, given the experience of earlier severe financial crises. Outsized deficits and epic bank bailouts may be useful in fighting a downturn, but what is the long-run macroeconomic impact,",
"title": ""
},
{
"docid": "neg:1840471_8",
"text": "We propose a quasi real-time method for discrimination of ventricular ectopic beats from both supraventricular and paced beats in the electrocardiogram (ECG). The heartbeat waveforms were evaluated within a fixed-length window around the fiducial points (100 ms before, 450 ms after). Our algorithm was designed to operate with minimal expert intervention and we define that the operator is required only to initially select up to three ‘normal’ heartbeats (the most frequently seen supraventricular or paced complexes). These were named original QRS templates and their copies were substituted continuously throughout the ECG analysis to capture slight variations in the heartbeat waveforms of the patient’s sustained rhythm. The method is based on matching of the evaluated heartbeat with the QRS templates by a complex set of ECG descriptors, including maximal cross-correlation, area difference and frequency spectrum difference. Temporal features were added by analyzing the R-R intervals. The classification criteria were trained by statistical assessment of the ECG descriptors calculated for all heartbeats in MIT-BIH Supraventricular Arrhythmia Database. The performance of the classifiers was tested on the independent MIT-BIH Arrhythmia Database. The achieved unbiased accuracy is represented by sensitivity of 98.4% and specificity of 98.86%, both being competitive to other published studies. The provided computationally efficient techniques enable the fast post-recording analysis of lengthy Holter-monitor ECG recordings, as well as they can serve as a quasi real-time detection method embedded into surface ECG monitors.",
"title": ""
},
{
"docid": "neg:1840471_9",
"text": "In this paper we briefly address DLR’s (German Aerospace Center) background in space robotics by hand of corresponding milestone projects including systems on the International Space Station. We then discuss the key technologies needed for the development of an artificial “robonaut” generation with mechatronic ultra-lightweight arms and multifingered hands. The third arm generation is nearly finished now, approaching the limits of what is technologically achievable today with respect to light weight and power losses. In a similar way DLR’s second generation of artificial four-fingered hands was a big step towards higher reliability, manipulability and overall",
"title": ""
},
{
"docid": "neg:1840471_10",
"text": "C utaneous metastases rarely develop in patients having cancer with solid tumors. The reported incidence of cutaneous metastases from a known primary malignancy ranges from 0.6% to 9%, usually appearing 2 to 3 years after the initial diagnosis.1-11 Skin metastases may represent the first sign of extranodal disease in 7.6% of patients with a primary oncologic diagnosis.1 Cutaneous metastases may also be the first sign of recurrent disease after treatment, with 75% of patients also having visceral metastases.2 Infrequently, cutaneous metastases may be seen as the primary manifestation of an undiagnosed malignancy.12 Prompt recognition of such tumors can be of great significance, affecting prognosis and management. The initial presentation of cutaneous metastases is frequently subtle and may be overlooked without proper index of suspicion, appearing as multiple or single nodules, plaques, and ulcers, in decreasing order of frequency. Commonly, a painless, mobile, erythematous papule is initially noted, which may enlarge to an inflammatory nodule over time.8 Such lesions may be misdiagnosed as cysts, lipomas, fibromas, or appendageal tumors. Clinical features of cutaneous metastases rarely provide information regarding the primary tumor, although the location of the tumor may be helpful because cutaneous metastases typically manifest in the same geographic region as the initial cancer. The most common primary tumors seen with cutaneous metastases are melanoma, breast, and squamous cell carcinoma of the head and neck.1 Cutaneous metastases are often firm, because of dermal or lymphatic involvement, or erythematous. These features may help rule out some nonvascular entities in the differential diagnosis (eg, cysts and fibromas). The presence of pigment most commonly correlates with cutaneous metastases from melanoma. Given the limited body of knowledge regarding distinct clinical findings, we sought to better elucidate the dermoscopic patterns of cutaneous metastases, with the goal of using this diagnostic tool to help identify these lesions. We describe 20 outpatients with biopsy-proven cutaneous metastases secondary to various underlying primary malignancies. Their clinical presentation is reviewed, emphasizing the dermoscopic findings, as well as the histopathologic correlation.",
"title": ""
},
{
"docid": "neg:1840471_11",
"text": "Interest in business intelligence and analytics education has begun to attract IS scholars’ attention. In order to discover new research questions, there is a need for conducting a literature review of extant studies on BI&A education. This study identified 44 research papers through using Google Scholar related to BI&A education. This research contributes to the field of BI&A education by (a) categorizing the existing studies on BI&A education into the key five research foci, and (b) identifying the research gaps and providing the guide for future BI&A and IS research.",
"title": ""
},
{
"docid": "neg:1840471_12",
"text": "The use of rumble strips on roads can provide drivers lane departure warning (LDW). However, rumble strips require an infrastructure and do not exist on a majority of roadways. Therefore, it is very desirable to have an effective in-vehicle LDW system to detect when the driver is in danger of departing the road and then triggers an alarm to warn the driver early enough to take corrective action. This paper presents the development of an image-based LDW system using the Lucas-Kanade (L-K) optical flow and the Hough transform methods. Our approach integrates both techniques to establish an operation algorithm to determine whether a warning signal should be issued based on the status of the vehicle deviating from its heading lane. The L-K optical flow tracking is used when the lane boundaries cannot be detected, while the lane detection technique is used when they become available. Even though both techniques are used in the system, only one method is activated at any given time because each technique has its own advantages and also disadvantages. The developed LDW system was road tested on several rural highways and also one section of the interstate I35 freeway. Overall, the system operates correctly as expected with a false alarm occurred only roughly about 1.18% of the operation time. This paper presents the system implementation together with our findings. Key-Words: Lane departure warning, Lucas-Kanade optical flow, Hough transform.",
"title": ""
},
{
"docid": "neg:1840471_13",
"text": "This free executive summary is provided by the National Academies as part of our mission to educate the world on issues of science, engineering, and health. If you are interested in reading the full book, please visit us online at http://www.nap.edu/catalog/9728.html . You may browse and search the full, authoritative version for free; you may also purchase a print or electronic version of the book. If you have questions or just want more information about the books published by the National Academies Press, please contact our customer service department toll-free at 888-624-8373.",
"title": ""
},
{
"docid": "neg:1840471_14",
"text": "Recurrent neural networks (RNNs) have drawn interest from machine learning researchers because of their effectiveness at preserving past inputs for time-varying data processing tasks. To understand the success and limitations of RNNs, it is critical that we advance our analysis of their fundamental memory properties. We focus on echo state networks (ESNs), which are RNNs with simple memoryless nodes and random connectivity. In most existing analyses, the short-term memory (STM) capacity results conclude that the ESN network size must scale linearly with the input size for unstructured inputs. The main contribution of this paper is to provide general results characterizing the STM capacity for linear ESNs with multidimensional input streams when the inputs have common low-dimensional structure: sparsity in a basis or significant statistical dependence between inputs. In both cases, we show that the number of nodes in the network must scale linearly with the information rate and poly-logarithmically with the input dimension. The analysis relies on advanced applications of random matrix theory and results in explicit non-asymptotic bounds on the recovery error. Taken together, this analysis provides a significant step forward in our understanding of the STM properties in RNNs.",
"title": ""
},
{
"docid": "neg:1840471_15",
"text": "Unhealthy lifestyle behaviour is driving an increase in the burden of chronic non-communicable diseases worldwide. Recent evidence suggests that poor diet and a lack of exercise contribute to the genesis and course of depression. While studies examining dietary improvement as a treatment strategy in depression are lacking, epidemiological evidence clearly points to diet quality being of importance to the risk of depression. Exercise has been shown to be an effective treatment strategy for depression, but this is not reflected in treatment guidelines, and increased physical activity is not routinely encouraged when managing depression in clinical practice. Recommendations regarding dietary improvement, increases in physical activity and smoking cessation should be routinely given to patients with depression. Specialised and detailed advice may not be necessary. Recommendations should focus on following national guidelines for healthy eating and physical activity.",
"title": ""
},
{
"docid": "neg:1840471_16",
"text": "In 1994, nutritional facts panels became mandatory for processed foods to improve consumer access to nutritional information and to promote healthy food choices. Recent applied work is reviewed here in terms of how consumers value and respond to nutritional labels. We first summarize the health and nutritional links found in the literature and frame this discussion in terms of the obesity policy debate. Second, we discuss several approaches that have been used to empirically investigate consumer responses to nutritional labels: (a) surveys, (b) nonexperimental approaches utilizing revealed preferences, and (c) experimentbased approaches. We conclude with a discussion and suggest avenues of future research. INTRODUCTION How the provision of nutritional information affects consumers’ food choices and whether consumers value nutritional information are particularly pertinent questions in a country where obesity is pervasive. Firms typically have more information about the quality of their products than do consumers, creating a situation of asymmetric information. It is prohibitively costly for most consumers to acquire nutritional information independently of firms. Firms can use this Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 1 of 30 information to signal their quality and to receive quality premiums. However, firms that sell less nutritious products prefer to omit nutritional information. In this market setting, firms may not have an incentive to fully reveal their product quality, may try to highlight certain attributes in their advertising claims while shrouding others (Gabaix & Laibson 2006), or may provide information in a less salient fashion (Chetty et al. 2007). Mandatory nutritional labeling can fill this void of information provision by correcting asymmetric information and transforming an experience-good or a credence-good characteristic into search-good characteristics (Caswell & Mojduszka 1996). Golan et al. (2000) argue that the effectiveness of food labeling depends on firms’ incentives for information provision, government information requirements, and the role of third-party entities in standardizing and certifying the accuracy of the information. Yet nutritional information is valuable only if consumers use it in some fashion. Early advances in consumer choice theory, such as market goods possessing desirable characteristics (Lancaster 1966) or market goods used in conjunction with time to produce desirable commodities (Becker 1965), set the theoretical foundation for studying how market prices, household characteristics, incomes, nutrient content, and taste considerations interact with and influence consumer choice. LaFrance (1983) develops a theoretical framework and estimates the marginal value of nutrient versus taste parameters in an analytical approach that imposes a sufficient degree of restrictions to generality to be empirically feasible. Real or perceived tradeoffs between nutritional and taste or pleasure considerations imply that consumers will not necessarily make healthier choices. Reduced search costs mean that consumers can more easily make choices that maximize their utility. Foster & Just (1989) provide a framework in which to analyze the effect of information on consumer choice and welfare in this context. 
They argue that Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 2 of 30 when consumers are uncertain about product quality, the provision of information can help to better align choices with consumer preferences. However, consumers may not use nutritional labels because consumers still require time and effort to process the information. Reading a nutritional facts panel (NFP), for instance, necessitates that the consumer remove the product from the shelf and turn the product to read the nutritional information on the back or side. In addition, consumers often have difficulty evaluating the information provided on the NFP or how to relate it to a healthy diet. Berning et al. (2008) present a simple model of demand for nutritional information. The consumer chooses to consume goods and information to maximize utility subject to budget and time constraints, which include time to acquire and to process nutritional information. Consumers who have strong preferences for nutritional content will acquire more nutritional information. Alternatively, other consumers may derive more utility from appearance or taste. Following Becker & Murphy (1993), Berning et al. show that nutritional information may act as a complement to the consumption of products with unknown nutritional quality, similar to the way advertisements complement advertised goods. From a policy perspective, the rise in the U.S. obesity rate coupled with the asymmetry of information have resulted in changes in the regulatory environment. The U.S. Food and Drug Administration (FDA) is currently considering a change to the format and content of nutritional labels, originally implemented in 1994 to promote increased label use. Consumers’ general understanding of the link between food consumption and health, and widespread interest in the provision of nutritional information on food labels, is documented in the existing literature (e.g., Williams 2005, Grunert & Wills 2007). Yet only approximately half Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 3 of 30 of consumers claim to use NFPs when making food purchasing decisions (Blitstein & Evans 2006). Moreover, self-reported consumer use of nutritional labels has declined from 1995 to 2006, with the largest decline for younger age groups (20–29 years) and less educated consumers (Todd & Variyam 2008). This decline supports research findings that consumers prefer for short front label claims over the NFP’s lengthy back label explanations (e.g., Levy & Fein 1998, Wansink et al. 2004, Williams 2005, Grunert & Wills 2007). Furthermore, regulatory rules and enforcement policies may have induced firms to move away from reinforcing nutritional claims through advertising (e.g., Ippolito & Pappalardo 2002). Finally, critical media coverage of regulatory challenges (e.g., Nestle 2000) may have contributed to decreased labeling usage over time. Excellent review papers on this topic preceded and inspired this present review (e.g., Baltas 2001, Williams 2005, Drichoutis et al. 2006). In particular, Drichoutis et al. 
(2006) reviews the nutritional labeling literature and addresses specific issues regarding the determinants of label use, the debate on mandatory labeling, label formats preferred by consumers, and the effect of nutritional label use on purchase and dietary behavior. The current review article updates and complements these earlier reviews by focusing on recent work and highlighting major contributions in applied analyses on how consumers value, utilize, and respond to nutritional labels. We first cover the health and nutritional aspects of consumer food choices found in the literature to frame the discussion on nutritional labels in the context of the recent debate on obesity prevention policies. Second, we discuss the different empirical approaches that are utilized to investigate consumers’ response to and valuation of nutritional labels, classifying existing work into three categories according to the empirical strategy and data sources. First, we present findings based on consumer surveys and stated consumer responses to Publisher: ANNUALREVIEWS; Journal: ARRE: Annual Review of Resource Economics; Copyright: Volume: 3; Issue: 0; Manuscript: 3_McCluskey; Month: ; Year: 2011 DOI: ; TOC Head: ; Section Head: ; Article Type: REVIEW ARTICLE Page 4 of 30 labels. The second set of articles reviewed utilizes nonexperimental data and focuses on estimating consumer valuation of labels on the basis of revealed preferences. Here, the empirical strategy is structural, using hedonic methods, structural demand analyses, or discrete choice models and allowing for estimation of consumers’ willingness to pay (WTP) for nutritional information. The last set of empirical contributions discussed is based on experimental data, differentiating market-level and natural experiments from laboratory evidence. These studies employ mainly reduced-form approaches. Finally, we conclude with a discussion of avenues for future research. CONSUMER FOOD DEMAND, NUTRITIONAL LABELS, AND OBESITY PREVENTION The U.S. Department of Health and Public Services declared the reduction of obesity rates to less than 15% to be one of the national health objectives for 2010, yet in 2009 no state met these targets, with only two states reporting obesity rates less than 20% (CDC 2010). Researchers have studied and identified many contributing factors, such as the decreasing relative price of caloriedense food (Chou et al. 2004) and marketing practices that took advantage of behavioral reactions to food (Smith 2004). Other researchers argue that an increased prevalence of fast food (Cutler et al. 2003) and increased portion sizes in restaurants and at home (Wansink & van Ittersum 2007) may be the driving factors of increased food consumption. In addition, food psychologists have focused on changes in the eating environment, pointing to distractions such as television, books, conversation with others, or preoccupation with work as leading to increased food intake (Wansink 2004). Although each of these factors potentially contributes to the obesity epidemic, they do not necessarily mean that consumers wi",
"title": ""
},
{
"docid": "neg:1840471_17",
"text": "This paper presents a novel neural machine translation model which jointly learns translation and source-side latent graph representations of sentences. Unlike existing pipelined approaches using syntactic parsers, our end-to-end model learns a latent graph parser as part of the encoder of an attention-based neural machine translation model, and thus the parser is optimized according to the translation objective. In experiments, we first show that our model compares favorably with state-of-the-art sequential and pipelined syntax-based NMT models. We also show that the performance of our model can be further improved by pretraining it with a small amount of treebank annotations. Our final ensemble model significantly outperforms the previous best models on the standard Englishto-Japanese translation dataset.",
"title": ""
},
{
"docid": "neg:1840471_18",
"text": "The problem of detecting community structures of a social network has been extensively studied over recent years, but most existing methods solely rely on the network structure and neglect the context information of the social relations. The main reason is that a context-rich network offers too much flexibility and complexity for automatic or manual modulation of the multifaceted context in the analysis process. We address the challenging problem of incorporating context information into the community analysis with a novel visual analysis mechanism. Our approach consists of two stages: interactive discovery of salient context, and iterative context-guided community detection. Central to the analysis process is a context relevance model (CRM) that visually characterizes the influence of a given set of contexts on the variation of the detected communities, and discloses the community structure in specific context configurations. The extracted relevance is used to drive an iterative visual reasoning process, in which the community structures are progressively discovered. We introduce a suite of visual representations to encode the community structures, the context as well as the CRM. In particular, we propose an enhanced parallel coordinates representation to depict the context and community structures, which allows for interactive data exploration and community investigation. Case studies on several datasets demonstrate the efficiency and accuracy of our approach.",
"title": ""
},
{
"docid": "neg:1840471_19",
"text": "This paper proposes a long short-term memory recurrent neural network (LSTM-RNN) for extracting melody and simultaneously detecting regions of melody from polyphonic audio using the proposed harmonic sum loss. The previous state-of-the-art algorithms have not been based on machine learning techniques and certainly not on deep architectures. The harmonics structure in melody is incorporated in the loss function to attain robustness against both octave mismatch and interference from background music. Experimental results show that the performance of the proposed method is better than or comparable to other state-of-the-art algorithms.",
"title": ""
}
] |
1840472 | Hierarchical target type identification for entity-oriented queries | [
{
"docid": "pos:1840472_0",
"text": "This paper addresses the problem of Named Entity Recognition in Query (NERQ), which involves detection of the named entity in a given query and classification of the named entity into predefined classes. NERQ is potentially useful in many applications in web search. The paper proposes taking a probabilistic approach to the task using query log data and Latent Dirichlet Allocation. We consider contexts of a named entity (i.e., the remainders of the named entity in queries) as words of a document, and classes of the named entity as topics. The topic model is constructed by a novel and general learning method referred to as WS-LDA (Weakly Supervised Latent Dirichlet Allocation), which employs weakly supervised learning (rather than unsupervised learning) using partially labeled seed entities. Experimental results show that the proposed method based on WS-LDA can accurately perform NERQ, and outperform the baseline methods.",
"title": ""
},
{
"docid": "pos:1840472_1",
"text": "In this paper, we propose a novel unsupervised approach to query segmentation, an important task in Web search. We use a generative query model to recover a query's underlying concepts that compose its original segmented form. The model's parameters are estimated using an expectation-maximization (EM) algorithm, optimizing the minimum description length objective function on a partial corpus that is specific to the query. To augment this unsupervised learning, we incorporate evidence from Wikipedia.\n Experiments show that our approach dramatically improves performance over the traditional approach that is based on mutual information, and produces comparable results with a supervised method. In particular, the basic generative language model contributes a 7.4% improvement over the mutual information based method (measured by segment F1 on the Intersection test set). EM optimization further improves the performance by 14.3%. Additional knowledge from Wikipedia provides another improvement of 24.3%, adding up to a total of 46% improvement (from 0.530 to 0.774).",
"title": ""
},
{
"docid": "pos:1840472_2",
"text": "The heterogeneous Web exacerbates IR problems and short user queries make them worse. The contents of web documents are not enough to find good answer documents. Link information and URL information compensates for the insufficiencies of content information. However, static combination of multiple evidences may lower the retrieval performance. We need different strategies to find target documents according to a query type. We can classify user queries as three categories, the topic relevance task, the homepage finding task, and the service finding task. In this paper, a user query classification scheme is proposed. This scheme uses the difference of distribution, mutual information, the usage rate as anchor texts, and the POS information for the classification. After we classified a user query, we apply different algorithms and information for the better results. For the topic relevance task, we emphasize the content information, on the other hand, for the homepage finding task, we emphasize the Link information and the URL information. We could get the best performance when our proposed classification method with the OKAPI scoring algorithm was used.",
"title": ""
}
] | [
{
"docid": "neg:1840472_0",
"text": "We contrast two theoretical approaches to social influence, one stressing interpersonal dependence, conceptualized as normative and informational influence (Deutsch & Gerard, 1955), and the other stressing group membership, conceptualized as self-categorization and referent informational influence (Turner, Hogg, Oakes, Reicher & Wetherell, 1987). We argue that both social comparisons to reduce uncertainty and the existence of normative pressure to comply depend on perceiving the source of influence as belonging to one's own category. This study tested these two approaches using three influence paradigms. First we demonstrate that, in Sherif's (1936) autokinetic effect paradigm, the impact of confederates on the formation of a norm decreases as their membership of a different category is made more salient to subjects. Second, in the Asch (1956) conformity paradigm, surveillance effectively exerts normative pressure if done by an in-group but not by an out-group. In-group influence decreases and out-group influence increases when subjects respond privately. Self-report data indicate that in-group confederates create more subjective uncertainty than out-group confederates and public responding seems to increase cohesiveness with in-group - but decrease it with out-group - sources of influence. In our third experiment we use the group polarization paradigm (e.g. Burnstein & Vinokur, 1973) to demonstrate that, when categorical differences between two subgroups within a discussion group are made salient, convergence of opinion between the subgroups is inhibited. Taken together the experiments show that self-categorization can be a crucial determining factor in social influence.",
"title": ""
},
{
"docid": "neg:1840472_1",
"text": "We use a low-dimensional linear model to describe the user rating matrix in a recommendation system. A non-negativity constraint is enforced in the linear model to ensure that each user’s rating profile can be represented as an additive linear combination of canonical coordinates. In order to learn such a constrained linear model from an incomplete rating matrix, we introduce two variations on Non-negative Matrix Factorization (NMF): one based on the Expectation-Maximization (EM) procedure and the other a Weighted Nonnegative Matrix Factorization (WNMF). Based on our experiments, the EM procedure converges well empirically and is less susceptible to the initial starting conditions than WNMF, but the latter is much more computationally efficient. Taking into account the advantages of both algorithms, a hybrid approach is presented and shown to be effective in real data sets. Overall, the NMF-based algorithms obtain the best prediction performance compared with other popular collaborative filtering algorithms in our experiments; the resulting linear models also contain useful patterns and features corresponding to user communities.",
"title": ""
},
{
"docid": "neg:1840472_2",
"text": "The VIENNA rectifiers have advantages of high efficiency as well as low output harmonics and are widely utilized in power conversion system when dc power sources are needed for supplying dc loads. VIENNA rectifiers based on three-phase/level can provide two voltage outputs with a neutral line at relatively low costs. However, total harmonic distortion (THD) of input current deteriorates seriously when unbalanced voltages occur. In addition, voltage outputs depend on system parameters, especially multiple loads. Therefore, unbalance output voltage controller and modified carrier-based pulse-width modulation (CBPWM) are proposed in this paper to solve the above problems. Unbalanced output voltage controller is designed based on average model considering independent output voltage and loads conditions. Meanwhile, reference voltages are modified according to different neutral point voltage conditions. The simulation and experimental results are presented to verify the proposed method.",
"title": ""
},
{
"docid": "neg:1840472_3",
"text": "We address two questions for training a convolutional neural network (CNN) for hyperspectral image classification: i) is it possible to build a pre-trained network? and ii) is the pretraining effective in furthering the performance? To answer the first question, we have devised an approach that pre-trains a network on multiple source datasets that differ in their hyperspectral characteristics and fine-tunes on a target dataset. This approach effectively resolves the architectural issue that arises when transferring meaningful information between the source and the target networks. To answer the second question, we carried out several ablation experiments. Based on the experimental results, a network trained from scratch performs as good as a network fine-tuned from a pre-trained network. However, we observed that pre-training the network has its own advantage in achieving better performances when deeper networks are required.",
"title": ""
},
{
"docid": "neg:1840472_4",
"text": "In just a few years, crowdsourcing markets like Mechanical Turk have become the dominant mechanism for for building \"gold standard\" datasets in areas of computer science ranging from natural language processing to audio transcription. The assumption behind this sea change - an assumption that is central to the approaches taken in hundreds of research projects - is that crowdsourced markets can accurately replicate the judgments of the general population for knowledge-oriented tasks. Focusing on the important domain of semantic relatedness algorithms and leveraging Clark's theory of common ground as a framework, we demonstrate that this assumption can be highly problematic. Using 7,921 semantic relatedness judgements from 72 scholars and 39 crowdworkers, we show that crowdworkers on Mechanical Turk produce significantly different semantic relatedness gold standard judgements than people from other communities. We also show that algorithms that perform well against Mechanical Turk gold standard datasets do significantly worse when evaluated against other communities' gold standards. Our results call into question the broad use of Mechanical Turk for the development of gold standard datasets and demonstrate the importance of understanding these datasets from a human-centered point-of-view. More generally, our findings problematize the notion that a universal gold standard dataset exists for all knowledge tasks.",
"title": ""
},
{
"docid": "neg:1840472_5",
"text": "Digital images are widely used and numerous application in different scientific fields use digital image processing algorithms where image segmentation is a common task. Thresholding represents one technique for solving that task and Kapur's and Otsu's methods are well known criteria often used for selecting thresholds. Finding optimal threshold values represents a hard optimization problem and swarm intelligence algorithms have been successfully used for solving such problems. In this paper we adjusted recent elephant herding optimization algorithm for multilevel thresholding by Kapur's and Otsu's method. Performance was tested on standard benchmark images and compared with four other swarm intelligence algorithms. Elephant herding optimization algorithm outperformed other approaches from literature and it was more robust.",
"title": ""
},
{
"docid": "neg:1840472_6",
"text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have allowed to replace of traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed for use case specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables to use a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is learned with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can be also thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.",
"title": ""
},
{
"docid": "neg:1840472_7",
"text": "OBJECTIVES\nThis study was aimed to compare the effectiveness of aromatherapy and acupressure massage intervention strategies on the sleep quality and quality of life (QOL) in career women.\n\n\nDESIGN\nThe randomized controlled trial experimental design was used in the present study. One hundred and thirty-two career women (24-55 years) voluntarily participated in this study and they were randomly assigned to (1) placebo (distilled water), (2) lavender essential oil (Lavandula angustifolia), (3) blended essential oil (1:1:1 ratio of L. angustifolia, Salvia sclarea, and Origanum majorana), and (4) acupressure massage groups for a 4-week treatment. The Pittsburgh Sleep Quality Index and Short Form 36 Health Survey were used to evaluate the intervention effects at pre- and postintervention.\n\n\nRESULTS\nAfter a 4-week treatment, all experimental groups (blended essential oil, lavender essential oil, and acupressure massage) showed significant improvements in sleep quality and QOL (p < 0.05). Significantly greater improvement in QOL was observed in the participants with blended essential oil treatment compared with those with lavender essential oil (p < 0.05), and a significantly greater improvement in sleep quality was observed in the acupressure massage and blended essential oil groups compared with the lavender essential oil group (p < 0.05).\n\n\nCONCLUSIONS\nThe blended essential oil exhibited greater dual benefits on improving both QOL and sleep quality compared with the interventions of lavender essential oil and acupressure massage in career women. These results suggest that aromatherapy and acupressure massage improve the sleep and QOL and may serve as the optimal means for career women to improve their sleep and QOL.",
"title": ""
},
{
"docid": "neg:1840472_8",
"text": "Visual appearance score, appearance mixture type and deformation are three important information sources for human pose estimation. This paper proposes to build a multi-source deep model in order to extract non-linear representation from these different aspects of information sources. With the deep model, the global, high-order human body articulation patterns in these information sources are extracted for pose estimation. The task for estimating body locations and the task for human detection are jointly learned using a unified deep model. The proposed approach can be viewed as a post-processing of pose estimation results and can flexibly integrate with existing methods by taking their information sources as input. By extracting the non-linear representation from multiple information sources, the deep model outperforms state-of-the-art by up to 8.6 percent on three public benchmark datasets.",
"title": ""
},
{
"docid": "neg:1840472_9",
"text": "PURPOSE/OBJECTIVES\nTo better understand treatment-induced changes in sexuality from the patient perspective, to learn how women manage these changes in sexuality, and to identify what information they want from nurses about this symptom.\n\n\nRESEARCH APPROACH\nQualitative descriptive methods.\n\n\nSETTING\nAn outpatient gynecologic clinic in an urban area in the southeastern United States served as the recruitment site for patients.\n\n\nPARTICIPANTS\nEight women, ages 33-69, receiving first-line treatment for ovarian cancer participated in individual interviews. Five women, ages 40-75, participated in a focus group and their status ranged from newly diagnosed to terminally ill from ovarian cancer.\n\n\nMETHODOLOGIC APPROACH\nBoth individual interviews and a focus group were conducted. Content analysis was used to identify themes that described the experience of women as they became aware of changes in their sexuality. Triangulation of approach, the researchers, and theory allowed for a rich description of the symptom experience.\n\n\nFINDINGS\nRegardless of age, women reported that ovarian cancer treatment had a detrimental impact on their sexuality and that the changes made them feel \"no longer whole.\" Mechanical changes caused by surgery coupled with hormonal changes added to the intensity and dimension of the symptom experience. Physiologic, psychological, and social factors also impacted how this symptom was experienced.\n\n\nCONCLUSIONS\nRegardless of age or relationship status, sexuality is altered by the diagnosis and treatment of ovarian cancer.\n\n\nINTERPRETATION\nNurses have an obligation to educate women with ovarian cancer about anticipated changes in their sexuality that may come from treatment.",
"title": ""
},
{
"docid": "neg:1840472_10",
"text": "Context. Mobile web apps represent a large share of the Internet today. However, they still lag behind native apps in terms of user experience. Progressive Web Apps (PWAs) are a new technology introduced by Google that aims at bridging this gap, with a set of APIs known as service workers at its core. Goal. In this paper, we present an empirical study that evaluates the impact of service workers on the energy efficiency of PWAs, when operating in different network conditions on two different generations of mobile devices. Method. We designed an empirical experiment with two main factors: the use of service workers and the type of network available (2G or WiFi). We performed the experiment by running a total of 7 PWAs on two devices (an LG G2 and a Nexus 6P) that we evaluated as blocking factor. Our response variable is the energy consumption of the devices. Results. Our results show that service workers do not have a significant impact over the energy consumption of the two devices, regardless of the network conditions. Also, no interaction was detected between the two factors. However, some patterns in the data show different behaviors among PWAs. Conclusions. This paper represents a first empirical investigation on PWAs. Our results show that the PWA and service workers technology is promising in terms of energy efficiency.",
"title": ""
},
{
"docid": "neg:1840472_11",
"text": "The manufacturing, converting and ennobling processes of paper are truly large area and reel-to-reel processes. Here, we describe a project focusing on using the converting and ennobling processes of paper in order to introduce electronic functions onto the paper surface. As key active electronic materials we are using organic molecules and polymers. We develop sensor, communication and display devices on paper and the main application areas are packaging and paper display applications.",
"title": ""
},
{
"docid": "neg:1840472_12",
"text": "Classification is one of the most active research and application areas of neural networks. The literature is vast and growing. This paper summarizes the some of the most important developments in neural network classification research. Specifically, the issues of posterior probability estimation, the link between neural and conventional classifiers, learning and generalization tradeoff in classification, the feature variable selection, as well as the effect of misclassification costs are examined. Our purpose is to provide a synthesis of the published research in this area and stimulate further research interests and efforts in the identified topics.",
"title": ""
},
{
"docid": "neg:1840472_13",
"text": "The manner in which quadrupeds change their locomotive patterns—walking, trotting, and galloping—with changing speed is poorly understood. In this paper, we provide evidence for interlimb coordination during gait transitions using a quadruped robot for which coordination between the legs can be self-organized through a simple “central pattern generator” (CPG) model. We demonstrate spontaneous gait transitions between energy-efficient patterns by changing only the parameter related to speed. Interlimb coordination was achieved with the use of local load sensing only without any preprogrammed patterns. Our model exploits physical communication through the body, suggesting that knowledge of physical communication is required to understand the leg coordination mechanism in legged animals and to establish design principles for legged robots that can reproduce flexible and efficient locomotion.",
"title": ""
},
{
"docid": "neg:1840472_14",
"text": "We present a system for real-time general object recognition (gor) for indoor robot in complex scenes. A point cloud image containing the object to be recognized from a Kinect sensor, for general object at will, must be extracted a point cloud model of the object with the Cluster Extraction method, and then we can compute the global features of the object model, making up the model database after processing many frame images. Here the global feature we used is Clustered Viewpoint Feature Histogram (CVFH) feature from Point Cloud Library (PCL). For real-time gor we must preprocess all the point cloud images streamed from the Kinect into clusters based on a clustering threshold and the min-max cluster sizes related to the size of the model, for reducing the amount of the clusters and improving the processing speed, and also compute the CVFH features of the clusters. For every cluster of a frame image, we search the several nearer features from the model database with the KNN method in the feature space, and we just consider the nearest model. If the strings of the model name contain the strings of the object to be recognized, it can be considered that we have recognized the general object; otherwise, we compute another cluster again and perform the above steps. The experiments showed that we had achieved the real-time recognition, and ensured the speed and accuracy for the gor.",
"title": ""
},
{
"docid": "neg:1840472_15",
"text": "Two flavors of the recommendation problem are the explicit and the implicit feedback settings. In the explicit feedback case, users rate items and the user-item preference relationship can be modelled on the basis of the ratings. In the harder but more common implicit feedback case, the system has to infer user preferences from indirect information: presence or absence of events, such as a user viewed an item. One approach for handling implicit feedback is to minimize a ranking objective function instead of the conventional prediction mean squared error. The naive minimization of a ranking objective function is typically expensive. This difficulty is usually overcome by a trade-off: sacrificing the accuracy to some extent for computational efficiency by sampling the objective function. In this paper, we present a computationally effective approach for the direct minimization of a ranking objective function, without sampling. We demonstrate by experiments on the Y!Music and Netflix data sets that the proposed method outperforms other implicit feedback recommenders in many cases in terms of the ErrorRate, ARP and Recall evaluation metrics.",
"title": ""
},
{
"docid": "neg:1840472_16",
"text": "We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.",
"title": ""
},
{
"docid": "neg:1840472_17",
"text": "This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from Web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: “An android is a robot” vs. “Snowcap is unmistakable”. Domain and style independence is obtained thanks to the annotation of a sample of the Wikipedia corpus and to a novel pattern generalization algorithm based on wordclass lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness.",
"title": ""
},
{
"docid": "neg:1840472_18",
"text": "I interactive multimedia technologies enable online firms to employ a variety of formats to present and promote their products: They can use pictures, videos, and sounds to depict products, as well as give consumers the opportunity to try out products virtually. Despite the several previous endeavors that studied the effects of different product presentation formats, the functional mechanisms underlying these presentation methods have not been investigated in a comprehensive way. This paper investigates a model showing how these functional mechanisms (namely, vividness and interactivity) influence consumers’ intentions to return to a website and their intentions to purchase products. A study conducted to test this model has largely confirmed our expectations: (1) both vividness and interactivity of product presentations are the primary design features that influence the efficacy of the presentations; (2) consumers’ perceptions of the diagnosticity of websites, their perceptions of the compatibility between online shopping and physical shopping, and their shopping enjoyment derived from a particular online shopping experience jointly influence consumers’ attitudes toward shopping at a website; and (3) both consumers’ attitudes toward products and their attitudes toward shopping at a website contribute to their intentions to purchase the products displayed on the website.",
"title": ""
},
{
"docid": "neg:1840472_19",
"text": "Vertical integration refers to one of the options that firms make decisions in the supply of oligopoly market. It was impacted by competition game between upstream firms and downstream firms. Based on the game theory and other previous studies,this paper built a dynamic game model of two-stage competition between the oligopoly suppliers of upstream and the vertical integration firms of downstream manufacturers. In the first stage, it analyzed the influences on integration degree by prices of intermediate goods when an oligopoly firm engages in a Bertrand-game if outputs are not limited. Moreover, it analyzed the influences on integration degree by price-diverge of intermediate goods if outputs were not restricted within a Bertrand Duopoly game equilibrium. In the second stage, there is a Cournot duopoly game between downstream specialization firms and downstream integration firms. Their marginal costs are affected by the integration degree and their yields are affected either under indifferent manufacture conditions. Finally, prices of intermediate goods are determined by the competition of upstream firms, the prices of intermediate goods affect the changes of integration degree between upstream firms and downstream firms. The conclusions can be referenced to decision-making of integration in market competition.",
"title": ""
}
] |
1840473 | From archaeon to eukaryote: the evolutionary dark ages of the eukaryotic cell. | [
{
"docid": "pos:1840473_0",
"text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.",
"title": ""
}
] | [
{
"docid": "neg:1840473_0",
"text": "There is little evidence available on the use of robot-assisted therapy in subacute stroke patients. A randomized controlled trial was carried out to evaluate the short-time efficacy of intensive robot-assisted therapy compared to usual physical therapy performed in the early phase after stroke onset. Fifty-three subacute stroke patients at their first-ever stroke were enrolled 30 ± 7 days after the acute event and randomized into two groups, both exposed to standard therapy. Additional 30 sessions of robot-assisted therapy were provided to the Experimental Group. Additional 30 sessions of usual therapy were provided to the Control Group. The following impairment evaluations were performed at the beginning (T0), after 15 sessions (T1), and at the end of the treatment (T2): Fugl-Meyer Assessment Scale (FM), Modified Ashworth Scale-Shoulder (MAS-S), Modified Ashworth Scale-Elbow (MAS-E), Total Passive Range of Motion-Shoulder/Elbow (pROM), and Motricity Index (MI). Evidence of significant improvements in MAS-S (p = 0.004), MAS-E (p = 0.018) and pROM (p < 0.0001) was found in the Experimental Group. Significant improvement was demonstrated in both Experimental and Control Group in FM (EG: p < 0.0001, CG: p < 0.0001) and MI (EG: p < 0.0001, CG: p < 0.0001), with an higher improvement in the Experimental Group. Robot-assisted upper limb rehabilitation treatment can contribute to increasing motor recovery in subacute stroke patients. Focusing on the early phase of stroke recovery has a high potential impact in clinical practice.",
"title": ""
},
{
"docid": "neg:1840473_1",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "neg:1840473_2",
"text": "We propose a distributed, multi-camera video analysis paradigm for aiport security surveillance. We propose to use a new class of biometry signatures, which are called soft biometry including a person's height, built, skin tone, color of shirts and trousers, motion pattern, trajectory history, etc., to ID and track errant passengers and suspicious events without having to shut down a whole terminal building and cancel multiple flights. The proposed research is to enable the reliable acquisition, maintenance, and correspondence of soft biometry signatures in a coordinated manner from a large number of video streams for security surveillance. The intellectual merit of the proposed research is to address three important video analysis problems in a distributed, multi-camera surveillance network: sensor network calibration, peer-to-peer sensor data fusion, and stationary-dynamic cooperative camera sensing.",
"title": ""
},
{
"docid": "neg:1840473_3",
"text": "OBJECTIVE\nTo develop a clinical practice guideline for red blood cell transfusion in adult trauma and critical care.\n\n\nDESIGN\nMeetings, teleconferences and electronic-based communication to achieve grading of the published evidence, discussion and consensus among the entire committee members.\n\n\nMETHODS\nThis practice management guideline was developed by a joint taskforce of EAST (Eastern Association for Surgery of Trauma) and the American College of Critical Care Medicine (ACCM) of the Society of Critical Care Medicine (SCCM). We performed a comprehensive literature review of the topic and graded the evidence using scientific assessment methods employed by the Canadian and U.S. Preventive Task Force (Grading of Evidence, Class I, II, III; Grading of Recommendations, Level I, II, III). A list of guideline recommendations was compiled by the members of the guidelines committees for the two societies. Following an extensive review process by external reviewers, the final guideline manuscript was reviewed and approved by the EAST Board of Directors, the Board of Regents of the ACCM and the Council of SCCM.\n\n\nRESULTS\nKey recommendations are listed by category, including (A) Indications for RBC transfusion in the general critically ill patient; (B) RBC transfusion in sepsis; (C) RBC transfusion in patients at risk for or with acute lung injury and acute respiratory distress syndrome; (D) RBC transfusion in patients with neurologic injury and diseases; (E) RBC transfusion risks; (F) Alternatives to RBC transfusion; and (G) Strategies to reduce RBC transfusion.\n\n\nCONCLUSIONS\nEvidence-based recommendations regarding the use of RBC transfusion in adult trauma and critical care will provide important information to critical care practitioners.",
"title": ""
},
{
"docid": "neg:1840473_4",
"text": "Software developers’ activities are in general recorded in software repositories such as version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. The combination and matching of events has been implemented in FRASR (Framework for Analyzing Software Repositories), augmenting the process mining framework ProM. FRASR has been successfully applied in a series of case studies addressing such aspects of the development process as roles of different developers and the way bug reports are handled.",
"title": ""
},
{
"docid": "neg:1840473_5",
"text": "Clustering methods for data-mining problems must be extremely scalable. In addition, several data mining applications demand that the clusters obtained be balanced, i.e., of approximately the same size or importance. In this paper, we propose a general framework for scalable, balanced clustering. The data clustering process is broken down into three steps: sampling of a small representative subset of the points, clustering of the sampled data, and populating the initial clusters with the remaining data followed by refinements. First, we show that a simple uniform sampling from the original data is sufficient to get a representative subset with high probability. While the proposed framework allows a large class of algorithms to be used for clustering the sampled set, we focus on some popular parametric algorithms for ease of exposition. We then present algorithms to populate and refine the clusters. The algorithm for populating the clusters is based on a generalization of the stable marriage problem, whereas the refinement algorithm is a constrained iterative relocation scheme. The complexity of the overall method is O(kN log N) for obtaining k balanced clusters from N data points, which compares favorably with other existing techniques for balanced clustering. In addition to providing balancing guarantees, the clustering performance obtained using the proposed framework is comparable to and often better than the corresponding unconstrained solution. Experimental results on several datasets, including high-dimensional (>20,000) ones, are provided to demonstrate the efficacy of the proposed framework.",
"title": ""
},
{
"docid": "neg:1840473_6",
"text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.",
"title": ""
},
{
"docid": "neg:1840473_7",
"text": "This demo showcases Scythe, a novel query-by-example system that can synthesize expressive SQL queries from inputoutput examples. Scythe is designed to help end-users program SQL and explore data simply using input-output examples. From a web-browser, users can obtain SQL queries with Scythe in an automated, interactive fashion: from a provided example, Scythe synthesizes SQL queries and resolves ambiguities via conversations with the users. In this demo, we first show Scythe how end users can formulate queries using Scythe; we then switch to the perspective of an algorithm designer to show how Scythe can scale up to handle complex SQL features, like outer joins and subqueries.",
"title": ""
},
{
"docid": "neg:1840473_8",
"text": "We propose Quadruplet Convolutional Neural Networks (Quad-CNN) for multi-object tracking, which learn to associate object detections across frames using quadruplet losses. The proposed networks consider target appearances together with their temporal adjacencies for data association. Unlike conventional ranking losses, the quadruplet loss enforces an additional constraint that makes temporally adjacent detections more closely located than the ones with large temporal gaps. We also employ a multi-task loss to jointly learn object association and bounding box regression for better localization. The whole network is trained end-to-end. For tracking, the target association is performed by minimax label propagation using the metric learned from the proposed network. We evaluate performance of our multi-object tracking algorithm on public MOT Challenge datasets, and achieve outstanding results.",
"title": ""
},
{
"docid": "neg:1840473_9",
"text": "Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weakness in two examples. C © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011 1 14–23 DOI: 10.1002/widm.8",
"title": ""
},
{
"docid": "neg:1840473_10",
"text": "Filamentous fungi can each produce dozens of secondary metabolites which are attractive as therapeutics, drugs, antimicrobials, flavour compounds and other high-value chemicals. Furthermore, they can be used as an expression system for eukaryotic proteins. Application of most fungal secondary metabolites is, however, so far hampered by the lack of suitable fermentation protocols for the producing strain and/or by low product titers. To overcome these limitations, we report here the engineering of the industrial fungus Aspergillus niger to produce high titers (up to 4,500 mg • l−1) of secondary metabolites belonging to the class of nonribosomal peptides. For a proof-of-concept study, we heterologously expressed the 351 kDa nonribosomal peptide synthetase ESYN from Fusarium oxysporum in A. niger. ESYN catalyzes the formation of cyclic depsipeptides of the enniatin family, which exhibit antimicrobial, antiviral and anticancer activities. The encoding gene esyn1 was put under control of a tunable bacterial-fungal hybrid promoter (Tet-on) which was switched on during early-exponential growth phase of A. niger cultures. The enniatins were isolated and purified by means of reverse phase chromatography and their identity and purity proven by tandem MS, NMR spectroscopy and X-ray crystallography. The initial yields of 1 mg • l−1 of enniatin were increased about 950 fold by optimizing feeding conditions and the morphology of A. niger in liquid shake flask cultures. Further yield optimization (about 4.5 fold) was accomplished by cultivating A. niger in 5 l fed batch fermentations. Finally, an autonomous A. niger expression host was established, which was independent from feeding with the enniatin precursor d-2-hydroxyvaleric acid d-Hiv. This was achieved by constitutively expressing a fungal d-Hiv dehydrogenase in the esyn1-expressing A. niger strain, which used the intracellular α-ketovaleric acid pool to generate d-Hiv. This is the first report demonstrating that A. niger is a potent and promising expression host for nonribosomal peptides with titers high enough to become industrially attractive. Application of the Tet-on system in A. niger allows precise control on the timing of product formation, thereby ensuring high yields and purity of the peptides produced.",
"title": ""
},
{
"docid": "neg:1840473_11",
"text": "Sound levels in animal shelters regularly exceed 100 dB. Noise is a physical stressor on animals that can lead to behavioral, physiological, and anatomical responses. There are currently no policies regulating noise levels in dog kennels. The objective of this study was to evaluate the noise levels dogs are exposed to in an animal shelter on a continuous basis and to determine the need, if any, for noise regulations. Noise levels at a newly constructed animal shelter were measured using a noise dosimeter in all indoor dog-holding areas. These holding areas included large dog adoptable, large dog stray, small dog adoptable, small dog stray, and front intake. The noise level was highest in the large adoptable area. Sound from the large adoptable area affected some of the noise measurements for the other rooms. Peak noise levels regularly exceeded the measuring capability of the dosimeter (118.9 dBA). Often, in new facility design, there is little attention paid to noise abatement, despite the evidence that noise causes physical and psychological stress on dogs. To meet their behavioral and physical needs, kennel design should also address optimal sound range.",
"title": ""
},
{
"docid": "neg:1840473_12",
"text": "Digital watermarking of multimedia content has become a very active research area over the last several years. A general framework for watermark embedding and detection/decoding is presented here along with a review of some of the algorithms for different media types described in the literature. We highlight some of the differences based on application such as copyright protection, authentication, tamper detection, and data hiding as well as differences in technology and system requirements for different media types such as digital images, video, audio and text.",
"title": ""
},
{
"docid": "neg:1840473_13",
"text": "The commoditization of high-performance networking has sparked research interest in the RDMA capability of this hardware. One-sided RDMA primitives, in particular, have generated substantial excitement due to the ability to directly access remote memory from within an application without involving the TCP/IP stack or the remote CPU. This paper considers how to leverage RDMA to improve the analytical performance of parallel database systems. To shuffle data efficiently using RDMA, one needs to consider a complex design space that includes (1) the number of open connections, (2) the contention for the shared network interface, (3) the RDMA transport function, and (4) how much memory should be reserved to exchange data between nodes during query processing. We contribute six designs that capture salient trade-offs in this design space. We comprehensively evaluate how transport-layer decisions impact the query performance of a database system for different generations of InfiniBand. We find that a shuffling operator that uses the RDMA Send/Receive transport function over the Unreliable Datagram transport service can transmit data up to 4× faster than an RDMA-capable MPI implementation in a 16-node cluster. The response time of TPC-H queries improves by as much as 2×.",
"title": ""
},
{
"docid": "neg:1840473_14",
"text": "Most computer systems currently consist of DRAM as main memory and hard disk drives (HDDs) as storage devices. Due to the volatile nature of DRAM, the main memory may suffer from data loss in the event of power failures or system crashes. With rapid development of new types of non-volatile memory (NVRAM), such as PCM, Memristor, and STT-RAM, it becomes likely that one of these technologies will replace DRAM as main memory in the not-too-distant future. In an NVRAM based buffer cache, any updated pages can be kept longer without the urgency to be flushed to HDDs. This opens opportunities for designing new buffer cache policies that can achieve better storage performance. However, it is challenging to design a policy that can also increase the cache hit ratio. In this paper, we propose a buffer cache policy, named I/O-Cache, that regroups and synchronizes long sets of consecutive dirty pages to take advantage of HDDs' fast sequential access speed and the non-volatile property of NVRAM. In addition, our new policy can dynamically separate the whole cache into a dirty cache and a clean cache, according to the characteristics of the workload, to decrease storage writes. We evaluate our scheme with various traces. The experimental results show that I/O-Cache shortens I/O completion time, decreases the number of I/O requests, and improves the cache hit ratio compared with existing cache policies.",
"title": ""
},
{
"docid": "neg:1840473_15",
"text": "With the growth of mobile data application and the ultimate expectations of 5G technology, the need to expand the capacity of the wireless networks is inevitable. Massive MIMO technique is currently taking a major part of the ongoing research, and expected to be the key player in the new cellular technologies. This papers presents an overview of the major aspects related to massive MIMO design including, antenna array general design, configuration, and challenges, in addition to advanced beamforming techniques and channel modeling and estimation issues affecting the implementation of such systems.",
"title": ""
},
{
"docid": "neg:1840473_16",
"text": "Social networks such as Facebook, MySpace, and Twitter have become increasingly important for reaching millions of users. Consequently, spammers are increasing using such networks for propagating spam. Existing filtering techniques such as collaborative filters and behavioral analysis filters are able to significantly reduce spam, each social network needs to build its own independent spam filter and support a spam team to keep spam prevention techniques current. We propose a framework for spam detection which can be used across all social network sites. There are numerous benefits of the framework including: 1) new spam detected on one social network, can quickly be identified across social networks; 2) accuracy of spam detection will improve with a large amount of data from across social networks; 3) other techniques (such as blacklists and message shingling) can be integrated and centralized; 4) new social networks can plug into the system easily, preventing spam at an early stage. We provide an experimental study of real datasets from social networks to demonstrate the flexibility and feasibility of our framework.",
"title": ""
},
{
"docid": "neg:1840473_17",
"text": "In this paper we present the preliminary work of a Basque poetry generation system. Basically, we have extracted the POS-tag sequences from some verse corpora and calculated the probability of each sequence. For the generation process we have defined 3 different experiments: Based on a strophe from the corpora, we (a) replace each word with other according to its POS-tag and suffixes, (b) replace each noun and adjective with another equally inflected word and (c) replace only nouns with semantically related ones (inflected). Finally we evaluate those strategies using a Turing Test-like evaluation.",
"title": ""
},
{
"docid": "neg:1840473_18",
"text": "Semi-Non-negative Matrix Factorization is a technique that learns a low-dimensional representation of a dataset that lends itself to a clustering interpretation. It is possible that the mapping between this new representation and our original data matrix contains rather complex hierarchical information with implicit lower-level hidden attributes, that classical one level clustering methodologies cannot interpret. In this work we propose a novel model, Deep Semi-NMF, that is able to learn such hidden representations that allow themselves to an interpretation of clustering according to different, unknown attributes of a given dataset. We also present a semi-supervised version of the algorithm, named Deep WSF, that allows the use of (partial) prior information for each of the known attributes of a dataset, that allows the model to be used on datasets with mixed attribute knowledge. Finally, we show that our models are able to learn low-dimensional representations that are better suited for clustering, but also classification, outperforming Semi-Non-negative Matrix Factorization, but also other state-of-the-art methodologies variants.",
"title": ""
},
{
"docid": "neg:1840473_19",
"text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.",
"title": ""
}
] |
1840474 | Compact Offset Microstrip-Fed MIMO Antenna for Band-Notched UWB Applications | [
{
"docid": "pos:1840474_0",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications. The antenna consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane. The antenna elements are placed perpendicularly to each other to obtain high isolation, and the narrow slot is added to reduce the mutual coupling of antenna elements in the low frequency band (3-4.5 GHz). The proposed MIMO antenna has a compact size of 32 ×32 mm2, and the antenna prototype is fabricated and measured. The measured results show that the proposed antenna design achieves an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than 15 dB, and a low envelope correlation coefficient of better than 0.02 across the frequency band, which are suitable for portable UWB applications.",
"title": ""
},
{
"docid": "pos:1840474_1",
"text": "This paper introduces a coupling element to enhance the isolation between two closely packed antennas operating at the same frequency band. The proposed structure consists of two antenna elements and a coupling element which is located in between the two antenna elements. The idea is to use field cancellation to enhance isolation by putting a coupling element which artificially creates an additional coupling path between the antenna elements. To validate the idea, a design for a USB dongle MIMO antenna for the 2.4 GHz WLAN band is presented. In this design, the antenna elements are etched on a compact low-cost FR4 PCB board with dimensions of 20times40times1.6 mm3. According to our measurement results, we can achieve more than 30 dB isolation between the antenna elements even though the two parallel individual planar inverted F antenna (PIFA) in the design share a solid ground plane with inter-antenna spacing (Center to Center) of less than 0.095 lambdao or edge to edge separations of just 3.6 mm (0.0294 lambdao). Both simulation and measurement results are used to confirm the antenna isolation and performance. The method can also be applied to different types of antennas such as non-planar antennas. Parametric studies and current distribution for the design are also included to show how to tune the structure and control the isolation.",
"title": ""
}
] | [
{
"docid": "neg:1840474_0",
"text": "Nowadays, security solutions are mainly focused on providing security defences, instead of solving one of the main reasons for security problems that refers to an appropriate Information Systems (IS) design. In fact, requirements engineering often neglects enough attention to security concerns. In this paper it will be presented a case study of our proposal, called SREP (Security Requirements Engineering Process), which is a standard-centred process and a reuse-based approach which deals with the security requirements at the earlier stages of software development in a systematic and intuitive way by providing a security resources repository and by integrating the Common Criteria into the software development lifecycle. In brief, a case study is shown in this paper demonstrating how the security requirements for a security critical IS can be obtained in a guided and systematic way by applying SREP.",
"title": ""
},
{
"docid": "neg:1840474_1",
"text": "In this paper we present the first public, online demonstration of MaxTract; a tool that converts PDF files containing mathematics into multiple formats including LTEX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.",
"title": ""
},
{
"docid": "neg:1840474_2",
"text": "Global contrast considers the color difference between a target region or pixel and the rest of the image. It is frequently used to measure the saliency of the region or pixel. In previous global contrast-based methods, saliency is usually measured by the sum of contrast from the entire image. We find that the spatial distribution of contrast is one important cue of saliency that is neglected by previous works. Foreground pixel usually has high contrast from all directions, since it is surrounded by the background. Background pixel often shows low contrast in at least one direction, as it has to connect to the background. Motivated by this intuition, we first compute directional contrast from different directions for each pixel, and propose minimum directional contrast (MDC) as raw saliency metric. Then an O(1) computation of MDC using integral image is proposed. It takes only 1.5 ms for an input image of the QVGA resolution. In saliency post-processing, we use marker-based watershed algorithm to estimate each pixel as foreground or background, followed by one linear function to highlight or suppress its saliency. Performance evaluation is carried on four public data sets. The proposed method significantly outperforms other global contrast-based methods, and achieves comparable or better performance than the state-of-the-art methods. The proposed method runs at 300 FPS and shows six times improvement in runtime over the state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840474_3",
"text": "Cloud-based radio access networks (C-RAN) have been proposed as a cost-efficient way of deploying small cells. Unlike conventional RANs, a C-RAN decouples the baseband processing unit (BBU) from the remote radio head (RRH), allowing for centralized operation of BBUs and scalable deployment of light-weight RRHs as small cells. In this work, we argue that the intelligent configuration of the front-haul network between the BBUs and RRHs, is essential in delivering the performance and energy benefits to the RAN and the BBU pool, respectively. We then propose FluidNet - a scalable, light-weight framework for realizing the full potential of C-RAN. FluidNet deploys a logically re-configurable front-haul to apply appropriate transmission strategies in different parts of the network and hence cater effectively to both heterogeneous user profiles and dynamic traffic load patterns. FluidNet's algorithms determine configurations that maximize the traffic demand satisfied on the RAN, while simultaneously optimizing the compute resource usage in the BBU pool. We prototype FluidNet on a 6 BBU, 6 RRH WiMAX C-RAN testbed. Prototype evaluations and large-scale simulations reveal that FluidNet's ability to re-configure its front-haul and tailor transmission strategies provides a 50% improvement in satisfying traffic demands, while reducing the compute resource usage in the BBU pool by 50% compared to baseline transmission schemes.",
"title": ""
},
{
"docid": "neg:1840474_4",
"text": "BACKGROUND\nRecent studies demonstrate that low-level laser therapy (LLLT) modulates many biochemical processes, especially the decrease of muscle injures, the increase in mitochondrial respiration and ATP synthesis for accelerating the healing process.\n\n\nOBJECTIVE\nIn this work, we evaluated mitochondrial respiratory chain complexes I, II, III and IV and succinate dehydrogenase activities after traumatic muscular injury.\n\n\nMETHODS\nMale Wistar rats were randomly divided into three groups (n=6): sham (uninjured muscle), muscle injury without treatment, muscle injury with LLLT (AsGa) 5J/cm(2). Gastrocnemius injury was induced by a single blunt-impact trauma. LLLT was used 2, 12, 24, 48, 72, 96, and 120 hours after muscle-trauma.\n\n\nRESULTS\nOur results showed that the activities of complex II and succinate dehydrogenase after 5days of muscular lesion were significantly increased when compared to the control group. Moreover, our results showed that LLLT significantly increased the activities of complexes I, II, III, IV and succinate dehydrogenase, when compared to the group of injured muscle without treatment.\n\n\nCONCLUSION\nThese results suggest that the treatment with low-level laser may induce an increase in ATP synthesis, and that this may accelerate the muscle healing process.",
"title": ""
},
{
"docid": "neg:1840474_5",
"text": "Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.",
"title": ""
},
{
"docid": "neg:1840474_6",
"text": "Potassium based ceramic materials composed from leucite in which 5 % of Al is exchanged with Fe and 4 % of hematite was synthesized by mechanochemical homogenization and annealing of K2O-SiO2-Al2O3-Fe2O3 mixtures. Synthesized material was characterized by X-ray Powder Diffraction (XRPD) and Scanning Electron Microscopy coupled with Energy Dispersive X-ray spectroscopy (SEM/EDX). The two methods are in good agreement in regard to the specimen chemical composition suggesting that a leucite chemical formula is K0.8Al0.7Fe0.15Si2.25O6. Rietveld structure refinement results reveal that about 20 % of vacancies exist in the position of K atoms.",
"title": ""
},
{
"docid": "neg:1840474_7",
"text": "It is hard to estimate optical flow given a realworld video sequence with camera shake and other motion blur. In this paper, we first investigate the blur parameterization for video footage using near linear motion elements. We then combine a commercial 3D pose sensor with an RGB camera, in order to film video footage of interest together with the camera motion. We illustrates that this additional camera motion/trajectory channel can be embedded into a hybrid framework by interleaving an iterative blind deconvolution and warping based optical flow scheme. Our method yields improved accuracy within three other state-of-the-art baselines given our proposed ground truth blurry sequences; and several other realworld sequences filmed by our imaging system.",
"title": ""
},
{
"docid": "neg:1840474_8",
"text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. Comparable observations have rarely been reported and possibly represent genetic heterogeneity.",
"title": ""
},
{
"docid": "neg:1840474_9",
"text": "Spark, a subset of Ada for engineering safety and security-critical systems, is one of the best commercially available frameworks for formal-methodssupported development of critical software. Spark is designed for verification and includes a software contract language for specifying functional properties of procedures. Even though Spark and its static analysis components are beneficial and easy to use, its contract language is almost never used due to the burdens the associated tool support imposes on developers. Symbolic execution (SymExe) techniques have made significant strides in automating reasoning about deep semantic properties of source code. However, most work on SymExe has focused on bugfinding and test case generation as opposed to tasks that are more verificationoriented such as contract checking. In this paper, we present: (a) SymExe techniques for checking software contracts in embedded critical systems, and (b) Bakar Kiasan, a tool that implements these techniques in an integrated development environment for Spark. We describe a methodology for using Bakar Kiasan that provides significant increases in automation, usability, and functionality over existing Spark tools, and we present results from experiments on its application to industrial examples.",
"title": ""
},
{
"docid": "neg:1840474_10",
"text": "In this paper we argue for the use of Unstructured Supplementary Service Data (USSD) as a platform for universal cell phone applications. We examine over a decade of ICT4D research, analyzing how USSD can extend and complement current uses of IVR and SMS for data collection, messaging, information access, social networking and complex user initiated transactions. Based on these findings we identify situations when a mobile based project should consider using USSD with increasingly common third party gateways over other mediums. This analysis also motivates the design and implementation of an open source library for rapid development of USSD applications. Finally, we explore three USSD use cases, demonstrating how USSD opens up a design space not available with IVR or SMS.",
"title": ""
},
{
"docid": "neg:1840474_11",
"text": "Graphs can represent biological networks at the molecular, protein, or species level. An important query is to find all matches of a pattern graph to a target graph. Accomplishing this is inherently difficult (NP-complete) and the efficiency of heuristic algorithms for the problem may depend upon the input graphs. The common aim of existing algorithms is to eliminate unsuccessful mappings as early as and as inexpensively as possible. We propose a new subgraph isomorphism algorithm which applies a search strategy to significantly reduce the search space without using any complex pruning rules or domain reduction procedures. We compare our method with the most recent and efficient subgraph isomorphism algorithms (VFlib, LAD, and our C++ implementation of FocusSearch which was originally distributed in Modula2) on synthetic, molecules, and interaction networks data. We show a significant reduction in the running time of our approach compared with these other excellent methods and show that our algorithm scales well as memory demands increase. Subgraph isomorphism algorithms are intensively used by biochemical tools. Our analysis gives a comprehensive comparison of different software approaches to subgraph isomorphism highlighting their weaknesses and strengths. This will help researchers make a rational choice among methods depending on their application. We also distribute an open-source package including our system and our own C++ implementation of FocusSearch together with all the used datasets ( http://ferrolab.dmi.unict.it/ri.html ). In future work, our findings may be extended to approximate subgraph isomorphism algorithms.",
"title": ""
},
{
"docid": "neg:1840474_12",
"text": "Can altmetric data be validly used for the measurement of societal impact? The current study seeks to answer this question with a comprehensive dataset (about 100,000 records) from very disparate sources (F1000, Altmetric, and an in-house database based on Web of Science). In the F1000 peer review system, experts attach particular tags to scientific papers which indicate whether a paper could be of interest for science or rather for other segments of society. The results show that papers with the tag\"good for teaching\"do achieve higher altmetric counts than papers without this tag - if the quality of the papers is controlled. At the same time, a higher citation count is shown especially by papers with a tag that is specifically scientifically oriented (\"new finding\"). The findings indicate that papers tailored for a readership outside the area of research should lead to societal impact. If altmetric data is to be used for the measurement of societal impact, the question arises of its normalization. In bibliometrics, citations are normalized for the papers' subject area and publication year. This study has taken a second analytic step involving a possible normalization of altmetric data. As the results show there are particular scientific topics which are of especial interest for a wide audience. Since these more or less interesting topics are not completely reflected in Thomson Reuters' journal sets, a normalization of altmetric data should not be based on the level of subject categories, but on the level of topics.",
"title": ""
},
{
"docid": "neg:1840474_13",
"text": "Working under a model of privacy in which data remains private even from the statistician, we study the tradeoff between privacy guarantees and the risk of the resulting statistical estimators. We develop private versions of classical information-theoretic bounds, in particular those due to Le Cam, Fano, and Assouad. These inequalities allow for a precise characterization of statistical rates under local privacy constraints and the development of provably (minimax) optimal estimation procedures. We provide a treatment of several canonical families of problems: mean estimation and median estimation, multinomial probability estimation, and nonparametric density estimation. For all of these families, we provide lower and upper bounds that match up to constant factors, and exhibit new (optimal) privacy-preserving mechanisms and computationally efficient estimators that achieve the bounds. Additionally, we present a variety of experimental results for estimation problems involving sensitive data, including salaries, censored blog posts and articles, and drug abuse; these experiments demonstrate the importance of deriving optimal procedures.",
"title": ""
},
{
"docid": "neg:1840474_14",
"text": "Brown adipose tissue (BAT) is specialized to dissipate chemical energy in the form of heat as a defense against cold and excessive feeding. Interest in the field of BAT biology has exploded in the past few years because of the therapeutic potential of BAT to counteract obesity and obesity-related diseases, including insulin resistance. Much progress has been made, particularly in the areas of BAT physiology in adult humans, developmental lineages of brown adipose cell fate, and hormonal control of BAT thermogenesis. As we enter into a new era of brown fat biology, the next challenge will be to develop strategies for activating BAT thermogenesis in adult humans to increase whole-body energy expenditure. This article reviews the recent major advances in this field and discusses emerging questions.",
"title": ""
},
{
"docid": "neg:1840474_15",
"text": "Accurately recognizing speaker emotion and age/gender from speech can provide better user experience for many spoken dialogue systems. In this study, we propose to use deep neural networks (DNNs) to encode each utterance into a fixed-length vector by pooling the activations of the last hidden layer over time. The feature encoding process is designed to be jointly trained with the utterance-level classifier for better classification. A kernel extreme learning machine (ELM) is further trained on the encoded vectors for better utterance-level classification. Experiments on a Mandarin dataset demonstrate the effectiveness of our proposed methods on speech emotion and age/gender recognition tasks.",
"title": ""
},
{
"docid": "neg:1840474_16",
"text": "Verbal fluency tasks have long been used to assess and estimate group and individual differences in executive functioning in both cognitive and neuropsychological research domains. Despite their ubiquity, however, the specific component processes important for success in these tasks have remained elusive. The current work sought to reveal these various components and their respective roles in determining performance in fluency tasks using latent variable analysis. Two types of verbal fluency (semantic and letter) were compared along with several cognitive constructs of interest (working memory capacity, inhibition, vocabulary size, and processing speed) in order to determine which constructs are necessary for performance in these tasks. The results are discussed within the context of a two-stage cyclical search process in which participants first search for higher order categories and then search for specific items within these categories.",
"title": ""
},
{
"docid": "neg:1840474_17",
"text": "The measurement of safe driving distance based on stereo vision is proposed. The model of camera imaging is established using traditional camera calibration method firstly. Secondly, the projection matrix is deduced according to camera's internal and external parameter and used to calibrate the camera. The method of camera calibration based on two-dimensional target plane is adopted. Then the distortion parameters are calculated when the nonlinear geometric model of camera imaging is built. Moreover, the camera's internal and external parameters are optimized on the basis of the projection error' least squares criterion so that the un-distortion image can be obtained. The matching is done between the left image and the right image corresponding to angular point. The parallax error and the distance between the target vehicle and the camera can be calculated. The experimental results show that the measurement scheme is an effective one in a security vehicles spacing survey. The proposed system is convenient for driver to control in time and precisely. It is able to increase the security in intelligent transportation vehicles.",
"title": ""
},
{
"docid": "neg:1840474_18",
"text": "We describe mechanical metamaterials created by folding flat sheets in the tradition of origami, the art of paper folding, and study them in terms of their basic geometric and stiffness properties, as well as load bearing capability. A periodic Miura-ori pattern and a non-periodic Ron Resch pattern were studied. Unexceptional coexistence of positive and negative Poisson's ratio was reported for Miura-ori pattern, which are consistent with the interesting shear behavior and infinity bulk modulus of the same pattern. Unusually strong load bearing capability of the Ron Resch pattern was found and attributed to the unique way of folding. This work paves the way to the study of intriguing properties of origami structures as mechanical metamaterials.",
"title": ""
}
] |
1840475 | Add English to image Chinese captioning | [
{
"docid": "pos:1840475_0",
"text": "This paper studies the problem of associating images with descriptive sentences by embedding them in a common latent space. We are interested in learning such embeddings from hundreds of thousands or millions of examples. Unfortunately, it is prohibitively expensive to fully annotate this many training images with ground-truth sentences. Instead, we ask whether we can learn better image-sentence embeddings by augmenting small fully annotated training sets with millions of images that have weak and noisy annotations (titles, tags, or descriptions). After investigating several state-of-the-art scalable embedding methods, we introduce a new algorithm called Stacked Auxiliary Embedding that can successfully transfer knowledge from millions of weakly annotated images to improve the accuracy of retrieval-based image description.",
"title": ""
},
{
"docid": "pos:1840475_1",
"text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"title": ""
},
{
"docid": "pos:1840475_2",
"text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"title": ""
}
] | [
{
"docid": "neg:1840475_0",
"text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease. [Formula: see text].",
"title": ""
},
{
"docid": "neg:1840475_1",
"text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840475_2",
"text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.",
"title": ""
},
{
"docid": "neg:1840475_3",
"text": "With the advent of IoT based technologies; the overall industrial sector is amenable to undergo a fundamental and essential change alike to the industrial revolution. Online Monitoring solutions of environmental polluting parameter using Internet Of Things (IoT) techniques help us to gather the parameter values such as pH, temperature, humidity and concentration of carbon monoxide gas, etc. Using sensors and enables to have a keen control on the environmental pollution caused by the industries. This paper introduces a LabVIEW based online pollution monitoring of industries for the control over pollution caused by untreated disposal of waste. This paper proposes the use of an AT-mega 2560 Arduino board which collects the temperature and humidity parameter from the DHT-11 sensor, carbon dioxide concentration using MG-811 and update it into the online database using MYSQL. For monitoring and controlling, a website is designed and hosted which will give a real essence of IoT. To increase the reliability and flexibility an android application is also developed.",
"title": ""
},
{
"docid": "neg:1840475_4",
"text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs), is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.",
"title": ""
},
{
"docid": "neg:1840475_5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "neg:1840475_6",
"text": "The Intelligence in Wikipedia project at the University of Washington is combining self-supervised information extraction (IE) techniques with a mixed initiative interface designed to encourage communal content creation (CCC). Since IE and CCC are each powerful ways to produce large amounts of structured information, they have been studied extensively — but only in isolation. By combining the two methods in a virtuous feedback cycle, we aim for substantial synergy. While previous papers have described the details of individual aspects of our endeavor [25, 26, 24, 13], this report provides an overview of the project’s progress and vision.",
"title": ""
},
{
"docid": "neg:1840475_7",
"text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.",
"title": ""
},
{
"docid": "neg:1840475_8",
"text": "Corneal topography is a non-invasive medical imaging techniqueto assess the shape of the cornea in ophthalmology. In this paper we demonstrate that in addition to its health care use, corneal topography could provide valuable biometric measurements for person authentication. To extract a feature vector from these images (topographies), we propose to fit the geometry of the corneal surface with Zernike polynomials, followed by a linear discriminant analysis (LDA) of the Zernike coefficients to select the most discriminating features. The results show that the proposed method reduced the typical d-dimensional Zernike feature vector (d=36) into a much lower r-dimensional feature vector (r=3), and improved the Equal Error Rate from 2.88% to 0.96%, with the added benefit of faster computation time.",
"title": ""
},
{
"docid": "neg:1840475_9",
"text": "Phishing is increasing dramatically with the development of modern technologies and the global worldwide computer networks. This results in the loss of customer’s confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is fraudulent effort aims to acquire sensitive information from users such as credit card credentials, and social security number. In this article, we propose a model for predicting phishing attacks based on Artificial Neural Network (ANN). A Feed Forward Neural Network trained by Back Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high acceptance ability for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates.",
"title": ""
},
{
"docid": "neg:1840475_10",
"text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.",
"title": ""
},
{
"docid": "neg:1840475_11",
"text": "We prove that binary orthogonal arrays of strength 8, length 12 and cardinality 1536 do not exist. This implies the nonexistence of arrays of parameters (strength,length,cardinality) = (n, n + 4, 6.2) for every integer n ≥ 8.",
"title": ""
},
{
"docid": "neg:1840475_12",
"text": "An empirical study has been conducted investigating the relationship between the performance of an aspect based language model in terms of perplexity and the corresponding information retrieval performance obtained. It is observed, on the corpora considered, that the perplexity of the language model has a systematic relationship with the achievable precision recall performance though it is not statistically significant.",
"title": ""
},
{
"docid": "neg:1840475_13",
"text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.",
"title": ""
},
{
"docid": "neg:1840475_14",
"text": "There is an increasing interest in employing multiple sensors for surveillance and communications. Some of the motivating factors are reliability, survivability, increase in the number of targets under consideration, and increase in required coverage. Tenney and Sandell have recently treated the Bayesian detection problem with distributed sensors. They did not consider the design of data fusion algorithms. We present an optimum data fusion structure given the detectors. Individual decisions are weighted according to the reliability of the detector and then a threshold comparison is performed to obtain the global decision.",
"title": ""
},
{
"docid": "neg:1840475_15",
"text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "neg:1840475_16",
"text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.",
"title": ""
},
{
"docid": "neg:1840475_17",
"text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.",
"title": ""
},
{
"docid": "neg:1840475_18",
"text": "With the problem of increased web resources and the huge amount of information available, the necessity of having automatic summarization systems appeared. Since summarization is needed the most in the process of searching for information on the web, where the user aims at a certain domain of interest according to his query, in this case domain-based summaries would serve the best. Despite the existence of plenty of research work in the domain-based summarization in English, there is lack of them in Arabic due to the shortage of existing knowledge bases. In this paper we introduce a query based, Arabic text, single document summarization using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic related concepts/ keywords and the lexical relations among them. The user’s query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain specific knowledge base to the expansion. For the summarization dataset, Essex Arabic Summaries Corpus was used. It has many topic based articles with multiple human summaries. The performance appeared to be enhanced when using our extracted knowledge base than to just use the WordNet.",
"title": ""
},
{
"docid": "neg:1840475_19",
"text": "KNTU CDRPM is a cable driven redundant parallel manipulator, which is under investigation for possible high speed and large workspace applications. This newly developed mechanisms have several advantages compared to the conventional parallel mechanisms. Its rotational motion range is relatively large, its redundancy improves safety for failure in cables, and its design is suitable for long-time high acceleration motions. In this paper, collision-free workspace of the manipulator is derived by applying fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design of a spatial cable-driven parallel manipulators. The results are elaborated in three presentations; constant-orientation workspace, total orientation workspace and orientation workspace.",
"title": ""
}
] |
1840476 | Botnet Research Survey | [
{
"docid": "pos:1840476_0",
"text": "Malicious botnets are networks of compromised computers that are controlled remotely to perform large-scale distributed denial-of-service (DDoS) attacks, send spam, trojan and phishing emails, distribute pirated media or conduct other usually illegitimate activities. This paper describes a methodology to detect, track and characterize botnets on a large Tier-1 ISP network. The approach presented here differs from previous attempts to detect botnets by employing scalable non-intrusive algorithms that analyze vast amounts of summary traffic data collected on selected network links. Our botnet analysis is performed mostly on transport layer data and thus does not depend on particular application layer information. Our algorithms produce alerts with information about controllers. Alerts are followed up with analysis of application layer data, that indicates less than 2% false positive rates.",
"title": ""
}
] | [
{
"docid": "neg:1840476_0",
"text": "Recent years have seen exciting developments in join algorithms. In 2008, Atserias, Grohe and Marx (henceforth AGM) proved a tight bound on the maximum result size of a full conjunctive query, given constraints on the input rel ation sizes. In 2012, Ngo, Porat, R «e and Rudra (henceforth NPRR) devised a join algorithm with worst-case running time proportional to the AGM bound [8]. Our commercial database system LogicBlox employs a novel join algorithm, leapfrog triejoin, which compared conspicuously well to the NPRR algorithm in preliminary benchmarks. This spurred us to analyze the complexity of leapfrog triejoin. In this pa per we establish that leapfrog triejoin is also worst-case o ptimal, up to a log factor, in the sense of NPRR. We improve on the results of NPRR by proving that leapfrog triejoin achieves worst-case optimality for finer-grained classes o f database instances, such as those defined by constraints on projection cardinalities. We show that NPRR is not worstcase optimal for such classes, giving a counterexamplewher e leapfrog triejoin runs inO(n log n) time and NPRR runs in Θ(n) time. On a practical note, leapfrog triejoin can be implemented using conventional data structures such as B-trees, and extends naturally to ∃1 queries. We believe our algorithm offers a useful addition to the existing toolbox o f join algorithms, being easy to absorb, simple to implement, and having a concise optimality proof.",
"title": ""
},
{
"docid": "neg:1840476_1",
"text": "Aspect-based opinion mining has attracted lots of attention today. In this thesis, we address the problem of product aspect rating prediction, where we would like to extract the product aspects, and predict aspect ratings simultaneously. Topic models have been widely adapted to jointly model aspects and sentiments, but existing models may not do the prediction task well due to their weakness in sentiment extraction. The sentiment topics usually do not have clear correspondence to commonly used ratings, and the model may fail to extract certain kinds of sentiments due to skewed data. To tackle this problem, we propose a sentiment-aligned topic model(SATM), where we incorporate two types of external knowledge: product-level overall rating distribution and word-level sentiment lexicon. Experiments on real dataset demonstrate that SATM is effective on product aspect rating prediction, and it achieves better performance compared to the existing approaches.",
"title": ""
},
{
"docid": "neg:1840476_2",
"text": "Software teams should follow a well defined goal and keep their work focused. Work fragmentation is bad for efficiency and quality. In this paper we empirically investigate the relationship between the fragmentation of developer contributions and the number of post-release failures. Our approach is to represent developer contributions with a developer-module network that we call contribution network. We use network centrality measures to measure the degree of fragmentation of developer contributions. Fragmentation is determined by the centrality of software modules in the contribution network. Our claim is that central software modules are more likely to be failure-prone than modules located in surrounding areas of the network. We analyze this hypothesis by exploring the network centrality of Microsoft Windows Vista binaries using several network centrality measures as well as linear and logistic regression analysis. In particular, we investigate which centrality measures are significant to predict the probability and number of post-release failures. Results of our experiments show that central modules are more failure-prone than modules located in surrounding areas of the network. Results further confirm that number of authors and number of commits are significant predictors for the probability of post-release failures. For predicting the number of post-release failures the closeness centrality measure is most significant.",
"title": ""
},
{
"docid": "neg:1840476_3",
"text": "The objective of ent i ty identification i s t o determine the correspondence between object instances f r o m more than one database. This paper ezamines the problem at the instance level assuming that schema level heterogeneity has been resolved a priori . Soundness and completeness are defined as the desired properties of any ent i ty identification technique. To achieve soundness, a set of ident i ty and distinctness rules are established for enti t ies in the integrated world. W e propose the use of eztended key, which i s the union of keys (and possibly other attributes) f r o m the relations t o be matched, and i t s corresponding ident i ty rule, t o determine the equivalence between tuples f r o m relations which m a y not share any common key. Instance level funct ional dependencies (ILFD), a f o r m of semantic constraint information about the real-world entities, are used t o derive the missing eztended key attribute values of a tuple.",
"title": ""
},
{
"docid": "neg:1840476_4",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
},
{
"docid": "neg:1840476_5",
"text": "Because the World Wide Web consists primarily of text, information extraction is central to any e ort that would use the Web as a resource for knowledge discovery. We show how information extraction can be cast as a standard machine learning problem, and argue for the suitability of relational learning in solving it. The implementation of a general-purpose relational learner for information extraction, SRV, is described. In contrast with earlier learning systems for information extraction, SRV makes no assumptions about document structure and the kinds of information available for use in learning extraction patterns. Instead, structural and other information is supplied as input in the form of an extensible token-oriented feature set. We demonstrate the e ectiveness of this approach by adapting SRV for use in learning extraction rules for a domain consisting of university course and research project pages sampled from the Web. Making SRV Web-ready only involves adding several simple HTML-speci c features to its basic feature set. The World Wide Web, with its explosive growth and ever-broadening reach, is swiftly becoming the default knowledge resource for many areas of endeavor. Unfortunately, although any one of over 200,000,000 Web pages is readily accessible to an Internet-connected workstation, the information content of these pages is, without human interpretation, largely inaccessible. Systems have been developed which can make sense of highly regularWeb pages, such as those generated automatically from internal databases in response to user queries (Doorenbos, Etzioni, & Weld 1997) (Kushmerick 1997). A surprising number of Web sites have pages amenable to the techniques used by these systems. Still, most Web pages do not exhibit the regularity required by they require. There is a larger class of pages, however, which are regular in a more abstract sense. ManyWeb pages come from collections in which each page describes a single entity or event (e.g., home pages in a CS department; each describes its owner). The purpose of such a page is often to convey essential facts about the entity it Copyright c 1998, American Association for Arti cial Intelligence (www.aaai.org). All rights reserved. describes. It is often reasonable to approach such a page with a set of standard questions, and to expect that the answers to these questions will be available as succinct text fragments in the page. A home page, for example, frequently lists the owner's name, a liations, email address, etc. The problem of identifying the text fragments that answer standard questions de ned for a document collection is called information extraction (IE) (Def 1995). Our interest in IE concerns the development of machine learning methods to solve it. We regard IE as a kind of text classi cation, which has strong a nities with the well-investigated problem of document classi cation, but also presents unique challenges. We share this focus with a number of other recent systems (Soderland 1996) (Cali & Mooney 1997), including a system designed to learn how to extract from HTML (Soderland 1997). In this paper we describe SRV, a top-down relational algorithm for information extraction. Central to the design of SRV is its reliance on a set of token-oriented features, which are easy to implement and add to the system. Since domain-speci c information is contained within this features, which are separate from the core algorithm, SRV is better poised than similar systems for targeting to new domains. 
We have used it to perform extraction from electronic seminar announcements, medical abstracts, and newswire articles on corporate acquisitions. The experiments reported here show that targeting the system to HTML involves nothing more than the addition of HTML-speci c features to its basic feature set. Learning for Information Extraction Consider a collection of Web pages describing university computer science courses. Given a page, a likely task for an information extraction system is to nd the title of the course the page describes. We call the title a eld and any literal title taken from an actual page, such as \\Introduction to Arti cial Intelligence,\" an instantiation or instance of the title eld. Note that the typical information extraction problem involves multiple elds, some of which may have multiple instantiations in a given le. For example, a course page might From: AAAI-98 Proceedings. Copyright © 1998, AAAI (www.aaai.org). All rights reserved.",
"title": ""
},
{
"docid": "neg:1840476_6",
"text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accomodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.",
"title": ""
},
{
"docid": "neg:1840476_7",
"text": "Multiple-antenna receivers offer numerous advantages over single-antenna receivers, including sensitivity improvement, ability to reject interferers spatially and enhancement of data-rate or link reliability via MIMO. In the recent past, RF/analog phased-array receivers have been investigated [1-4]. On the other hand, digital beamforming offers far greater flexibility, including ability to form multiple simultaneous beams, ease of digital array calibration and support for MIMO. However, ADC dynamic range is challenged due to the absence of spatial interference rejection at RF/analog.",
"title": ""
},
{
"docid": "neg:1840476_8",
"text": "Skin whitening products are commercially available for cosmetic purposes in order to obtain a lighter skin appearance. They are also utilized for clinical treatment of pigmentary disorders such as melasma or postinflammatory hyperpigmentation. Whitening agents act at various levels of melanin production in the skin. Many of them are known as competitive inhibitors of tyrosinase, the key enzyme in melanogenesis. Others inhibit the maturation of this enzyme or the transport of pigment granules (melanosomes) from melanocytes to surrounding keratinocytes. In this review we present an overview of (natural) whitening products that may decrease skin pigmentation by their interference with the pigmentary processes.",
"title": ""
},
{
"docid": "neg:1840476_9",
"text": "Abrlracr-A h u l a for the cppecity et arbitrary sbgle-wer chrurwla without feedback (mot neccgdueily Wium\" stable, stationary, etc.) is proved. Capacity ie shown to e i p l the supremum, over all input processts, & the input-outpat infiqjknda QBnd as the llnainl ia praabiutJr d the normalized information density. The key to thir zbllljt is a ntw a\"c sppmrh bosed 811 a Ampie II(A Lenar trwrd eu the pralwbility of m-4v hgpothesb t#tcl UIOlls eq*rdIaN <hypotheses. A neassruy and d c i e n t coadition Eor the validity of the strong comeme is given, as well as g\"l expressions for eeapacity.",
"title": ""
},
{
"docid": "neg:1840476_10",
"text": "Intermediate and higher vision processes require selection of a subset of the available sensory information before further processing. Usually, this selection is implemented in the form of a spatially circumscribed region of the visual field, the so-called \"focus of attention\" which scans the visual scene dependent on the input and on the attentional state of the subject. We here present a model for the control of the focus of attention in primates, based on a saliency map. This mechanism is not only expected to model the functionality of biological vision but also to be essential for the understanding of complex scenes in machine vision.",
"title": ""
},
{
"docid": "neg:1840476_11",
"text": "In this paper we introduce a friction observer for robots with joint torque sensing (in particular for the DLR medical robot) in order to increase the positioning accuracy and the performance of torque control. The observer output corresponds to the low-pass filtered friction torque. It is used for friction compensation in conjunction with a MIMO controller designed for flexible joint arms. A passivity analysis is done for this friction compensation, allowing a Lyapunov based convergence analysis in the context of the nonlinear robot dynamics. For the complete controlled system, global asymptotic stability can be shown. Experimental results validate the practical efficiency of the approach.",
"title": ""
},
{
"docid": "neg:1840476_12",
"text": "Over the last decade or so, it has become increasingly clear to many cognitive scientists that research into human language (and cognition in general, for that matter) has largely neglected how language and thought are embedded in the body and the world. As argued by, for instance, Clark (1997), cognition is fundamentally embodied, that is, it can only be studied in relation to human action, perception, thought, and experience. As Feldman puts it: \" Human language and thought are crucially shaped by the properties of our bodies and the structure of our physical and social environment. Language and thought are not best studied as formal mathematics and logic, but as adaptations that enable creatures like us to thrive in a wide range of situations \" (p. 7). Although it may seem paradoxical to try formalizing this view in a computational theory of language comprehension, this is exactly what From Molecule to Metaphor does. Starting from the assumption that human thought is neural computation, Feldman develops a computational theory that takes the embodied nature of language into account: the neural theory of language. The book comprises 27 short chapters, distributed over nine parts. Part I presents the basic ideas behind embodied language and cognition and explains how the embodiment of language is apparent in the brain: The neural circuits involved in a particular experience or action are, for a large part, the same circuits involved in processing language about this experience or action. Part II discusses neural computation, starting from the molecules that take part in information processing by neurons. This detailed exposition is followed by a description of neuronal networks in the human body, in particular in the brain. The description of the neural theory of language begins in Part III, where it is explained how localist neural networks, often used as psycholinguistic models, can represent the meaning of concepts. This is done by introducing triangle nodes into the network. Each triangle node connects the nodes representing a concept, a role, and a filler—for example, \" pea, \" \" has-color, \" and \" green. \" Such networks are trained by a process called recruitment learning, which is described only very informally. This is certainly an interesting idea for combining propositional and connectionist models, but it does leave the reader with a number of questions. For instance, how is the concept distinguished from the filler when they can be interchanged, as …",
"title": ""
},
{
"docid": "neg:1840476_13",
"text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.",
"title": ""
},
{
"docid": "neg:1840476_14",
"text": "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.",
"title": ""
},
{
"docid": "neg:1840476_15",
"text": "This is a landmark book. For anyone interested in language, in dictionaries and thesauri, or natural language processing, the introduction, Chapters 14, and Chapter 16 are must reading. (Select other chapters according to your special interests; see the chapter-by-chapter review). These chapters provide a thorough introduction to the preeminent electronic lexical database of today in terms of accessibility and usage in a wide range of applications. But what does that have to do with digital libraries? Natural language processing is essential for dealing efficiently with the large quantities of text now available online: fact extraction and summarization, automated indexing and text categorization, and machine translation. Another essential function is helping the user with query formulation through synonym relationships between words and hierarchical and other relationships between concepts. WordNet supports both of these functions and thus deserves careful study by the digital library community.",
"title": ""
},
{
"docid": "neg:1840476_16",
"text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.",
"title": ""
},
{
"docid": "neg:1840476_17",
"text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.",
"title": ""
},
{
"docid": "neg:1840476_18",
"text": "Formal modeling rules can be used to ensure that an enterprise architecture is correct. Despite their apparent utility and despite mature tool support, formal modelling rules are rarely, if ever, used in practice in enterprise architecture in industry. In this paper we propose a rule authoring method that we believe aligns with actual modelling practice, at least as witnessed in enterprise architecture projects at the Swedish Defence Materiel Administration. The proposed method follows the business rules approach: the rules are specified in a (controlled) natural language which makes them accessible to all stakeholders and easy to modify as the meta-model matures and evolves over time. The method was put to test during 2014 in two large scale enterprise architecture projects, and we report on the experiences from that. To the best of our knowledge, this is the first time extensive formal modelling rules for enterprise architecture has been tested in industry and reported in the",
"title": ""
},
{
"docid": "neg:1840476_19",
"text": "We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.",
"title": ""
}
] |
1840477 | Beyond Trending Topics: Real-World Event Identification on Twitter | [
{
"docid": "pos:1840477_0",
"text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "pos:1840477_1",
"text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.",
"title": ""
}
] | [
{
"docid": "neg:1840477_0",
"text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.",
"title": ""
},
{
"docid": "neg:1840477_1",
"text": "This work is aimed at modeling, designing and developing an egg incubator system that is able to incubate various types of egg within the temperature range of 35 – 40 0 C. This system uses temperature and humidity sensors that can measure the condition of the incubator and automatically change to the suitable condition for the egg. Extreme variations in incubation temperature affect the embryo and ultimately, post hatch performance. In this work, electric bulbs were used to give the suitable temperature to the egg whereas water and controlling fan were used to ensure that humidity and ventilation were in good condition. LCD is used to display status condition of the incubator and an interface (Keypad) is provided to key in the appropriate temperature range for the egg. To ensure that all part of the eggs was heated by the lamp, DC motor was used to rotate iron rod at the bottom side and automatically change position of the egg. The entire element is controlled using AT89C52 Microcontroller. The temperature of the incubator is maintained at the normal temperature using PID controller implemented in microcontroller. Mathematical model of the incubator, actuator and PID controller were developed. Controller design based on the models was developed using Matlab Simulink. The models were validated through simulation and the Zeigler-Nichol tuning method was adopted as the tuning technique for varying the temperature control parameters of the PID controller in order to achieve a desirable transient response of the system when subjected to a unit step input. After several assumptions and simulations, a set of optimal parameters were obtained at the result of the third test that exhibited a commendable improvement in the overshoot, rise time, peak time and settling time thus improving the robustness and stability of the system. Keyword: Egg Incubator System, AT89C52 Microcontroller, PID Controller, Temperature Sensor.",
"title": ""
},
{
"docid": "neg:1840477_2",
"text": "Imbalanced data learning is one of the challenging problems in data mining; among this matter, founding the right model assessment measures is almost a primary research issue. Skewed class distribution causes a misreading of common evaluation measures as well it lead a biased classification. This article presents a set of alternative for imbalanced data learning assessment, using a combined measures (G-means, likelihood ratios, Discriminant power, F-Measure Balanced Accuracy, Youden index, Matthews correlation coefficient), and graphical performance assessment (ROC curve, Area Under Curve, Partial AUC, Weighted AUC, Cumulative Gains Curve and lift chart, Area Under Lift AUL), that aim to provide a more credible evaluation. We analyze the applications of these measures in churn prediction models evaluation, a well known application of imbalanced data",
"title": ""
},
{
"docid": "neg:1840477_3",
"text": "Given a matrix A ∈ R, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of A and then retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a recent, elegant non-commutative Bernstein inequality, and compare our bounds with all existing (to the best of our knowledge) elementwise matrix sparsification algorithms.",
"title": ""
},
{
"docid": "neg:1840477_4",
"text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.",
"title": ""
},
{
"docid": "neg:1840477_5",
"text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.",
"title": ""
},
{
"docid": "neg:1840477_6",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "neg:1840477_7",
"text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).",
"title": ""
},
{
"docid": "neg:1840477_8",
"text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840477_9",
"text": "The potential of high-resolution IKONOS and QuickBird satellite imagery for mapping and analysis of land and water resources at local scales in Minnesota is assessed in a series of three applications. The applications and accuracies evaluated include: (1) classification of lake water clarity (r = 0.89), (2) mapping of urban impervious surface area (r = 0.98), and (3) aquatic vegetation surveys of emergent and submergent plant groups (80% accuracy). There were several notable findings from these applications. For example, modeling and estimation approaches developed for Landsat TM data for continuous variables such as lake water clarity and impervious surface area can be applied to high-resolution satellite data. The rapid delivery of spatial data can be coupled with current GPS and field computer technologies to bring the imagery into the field for cover type validation. We also found several limitations in working with this data type. For example, shadows can influence feature classification and their effects need to be evaluated. Nevertheless, high-resolution satellite data has excellent potential to extend satellite remote sensing beyond what has been possible with aerial photography and Landsat data, and should be of interest to resource managers as a way to create timely and reliable assessments of land and water resources at a local scale. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840477_10",
"text": "The physical formats used to represent linguistic data and its annotations have evolved over the past four decades, accommodating different needs and perspectives as well as incorporating advances in data representation generally. This chapter provides an overview of representation formats with the aim of surveying the relevant issues for representing different data types together with current stateof-the-art solutions, in order to provide sufficient information to guide others in the choice of a representation format or formats.",
"title": ""
},
{
"docid": "neg:1840477_11",
"text": "In this paper, a low profile LLC resonant converter with two transformers using a planar core is proposed for a slim switching mode power supply (SMPS). Design procedures, magnetic modeling and voltage gain characteristics on the proposed planar transformer and converter are described in detail. LLC resonant converter including two transformers using a planar core is connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter is designed and tested.",
"title": ""
},
{
"docid": "neg:1840477_12",
"text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.",
"title": ""
},
{
"docid": "neg:1840477_13",
"text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull",
"title": ""
},
{
"docid": "neg:1840477_14",
"text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.",
"title": ""
},
{
"docid": "neg:1840477_15",
"text": "SaaS companies generate revenues by charging recurring subscription fees for using their software services. The fast growth of SaaS companies is usually accompanied with huge upfront costs in marketing expenses targeted at their potential customers. Customer retention is a critical issue for SaaS companies because it takes twelve months on average to break-even with the expenses for a single customer. This study describes a methodology for helping SaaS companies manage their customer relationships. We investigated the time-dependent software feature usage data, for example, login numbers and comment numbers, to predict whether a customer would churn within the next three months. Our study compared model performance across four classification algorithms. The XGBoost model yielded the best results for identifying the most important software usage features and for classifying customers as either churn type or non-risky type. Our model achieved a 10-fold cross-validated mean AUC score of 0.7941. Companies can choose to move along the ROC curve to accommodate to their marketing capability. The feature importance output from the XGBoost model can facilitate SaaS companies in identifying the most significant software features to launch more effective marketing campaigns when facing prospective customers.",
"title": ""
},
{
"docid": "neg:1840477_16",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "neg:1840477_17",
"text": "This work presents a novel semi-supervised learning approach for data-driven modeling of asset failures when health status is only partially known in historical data. We combine a generative model parameterized by deep neural networks with non-linear embedding technique. It allows us to build prognostic models with the limited amount of health status information for the precise prediction of future asset reliability. The proposed method is evaluated on a publicly available dataset for remaining useful life (RUL) estimation, which shows significant improvement even when a fraction of the data with known health status is as sparse as 1% of the total. Our study suggests that the non-linear embedding based on a deep generative model can efficiently regularize a complex model with deep architectures while achieving high prediction accuracy that is far less sensitive to the availability of health status information.",
"title": ""
},
{
"docid": "neg:1840477_18",
"text": "Big Data architectures allow to flexibly store and process heterogeneous data, from multiple sources, in its original format. The structure of those data, commonly supplied by means of REST APIs, is continuously evolving, forcing data analysts using it need to adapt their analytical processes after each release. This gets more challenging when aiming to perform an integrated or historical analysis of multiple sources. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct for a data governance protocol that systematically annotates and integrates data from multiple sources in its original format. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. A functional evaluation on realworld APIs is performed in order to validate our approach.",
"title": ""
},
{
"docid": "neg:1840477_19",
"text": "We derive the general expression of the anisotropic magnetoresistance (AMR) ratio of ferromagnets for a relative angle between the magnetization direction and the current direction. We here use the two-current model for a system consisting of a spin-polarized conduction state (s) and localized d states (d) with spin-orbit interaction. Using the expression, we analyze the AMR ratios of Ni and a half-metallic ferromagnet. These results correspond well to the respective experimental results. In addition, we give an intuitive explanation about a relation between the sign of the AMR ratio and the s-d scattering process. Introduction The anisotropic magnetoresistance (AMR) effect, in which the electrical resistivity depends on a relative angle θ between the magnetization (Mex) direction and the electric current (I) direction, has been studied extensively both experimentally [1-5] and theoretically [1,6]. The AMR ratio is often defined by ( ) ( ) ρ θ ρ θ ρ ρ ρ ⊥",
"title": ""
}
] |
1840478 | A phase space model of Fourier ptychographic microscopy. | [
{
"docid": "pos:1840478_0",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
}
] | [
{
"docid": "neg:1840478_0",
"text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.",
"title": ""
},
{
"docid": "neg:1840478_1",
"text": "In this paper, we analyze the radio channel characteristics at mmWave frequencies for 5G cellular communications in urban scenarios. 3D-ray tracing simulations in the downtown areas of Ottawa and Chicago are conducted in both the 2 GHz and 28 GHz bands. Each area has two different deployment scenarios, with different transmitter height and different density of buildings. Based on the observations of the ray-tracing experiments, important parameters of the radio channel model, such as path loss exponent, shadowing variance, delay spread and angle spread, are provided, forming the basis of a mmWave channel model. Based on the analysis and the 3GPP 3D-Spatial Channel Model (SCM) framework, we introduce a a preliminary mmWave channel model at 28 GHz.",
"title": ""
},
{
"docid": "neg:1840478_2",
"text": "A novel robust adaptive beamforming method for conformal array is proposed. By using interpolation technique, the cylindrical conformal array with directional antenna elements is transformed to a virtual uniform linear array with omni-directional elements. This method can compensate the amplitude and mutual coupling errors as well as desired signal point errors of the conformal array efficiently. It is a universal method and can be applied to other curved conformal arrays. After the transformation, most of the existing adaptive beamforming algorithms can be applied to conformal array directly. The efficiency of the proposed scheme is assessed through numerical simulations.",
"title": ""
},
{
"docid": "neg:1840478_3",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "neg:1840478_4",
"text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.",
"title": ""
},
{
"docid": "neg:1840478_5",
"text": "In the era of the Social Web, crowdfunding has become an increasingly more important channel for entrepreneurs to raise funds from the crowd to support their startup projects. Previous studies examined various factors such as project goals, project durations, and categories of projects that might influence the outcomes of the fund raising campaigns. However, textual information of projects has rarely been studied for analyzing crowdfunding successes. The main contribution of our research work is the design of a novel text analytics-based framework that can extract latent semantics from the textual descriptions of projects to predict the fund raising outcomes of these projects. More specifically, we develop the Domain-Constraint Latent Dirichlet Allocation (DC-LDA) topic model for effective extraction of topical features from texts. Based on two real-world crowdfunding datasets, our experimental results reveal that the proposed framework outperforms a classical LDA-based method in predicting fund raising success by an average of 11% in terms of F1 score. The managerial implication of our research is that entrepreneurs can apply the proposed methodology to identify the most influential topical features embedded in project descriptions, Corresponding author at: School of Information, Renmin University of China, Beijing, 100872, P.R. China. Email address: hui.yuan@my.cityu.edu.hk (H. Yuan), raylau@cityu.edu.hk (R.Y.K. Lau), weixu@ruc.edu.cn (W. Xu) AC C EP TE D M AN U SC R IP T ACCEPTED MANUSCRIPT 2 and hence to better promote their projects and improving the chance of raising sufficient funds for their projects.",
"title": ""
},
{
"docid": "neg:1840478_6",
"text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. These templates, called cases, can be modified individually or collectively into a new workflow to meet the business specification. However, little research has been done on how to manage workflow models, including issues such as model storage, model retrieval, model reuse and assembly. In this paper, we propose a novel framework to support workflow modeling and design by adapting workflow cases from a repository of process models. Our approach to workflow model management is based on a structured workflow lifecycle and leverages recent advances in model management and case-based reasoning techniques. Our contributions include a conceptual model of workflow cases, a similarity flooding algorithm for workflow case retrieval, and a domain-independent AI planning approach to workflow case composition. We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840478_7",
"text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.",
"title": ""
},
{
"docid": "neg:1840478_8",
"text": "We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence. These models lead to better performance, both in terms of general translation quality and pronoun prediction, when trained on small corpora, although this improvement largely disappears when trained with a larger corpus. We also discover that attention-based neural machine translation is well suited for pronoun prediction and compares favorably with other approaches that were specifically designed for this task.",
"title": ""
},
{
"docid": "neg:1840478_9",
"text": "Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, for the variousity of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory(Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, to identify the error types and to locate the error positions. In the corpora of this year’s shared task, there can be multiple errors in a single offset of a sentence, to address which, we simutaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering error as a special kind of word selection error which is longer during training phase, and then separate them by length during testing phase. In NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis(CGED), Our system achieved relatively high F1 for all the three levels in the traditional Chinese track and for the detection level in the Simpified Chinese track.",
"title": ""
},
{
"docid": "neg:1840478_10",
"text": "Text summarization is the process of creating a short description of a specified text while preserving its information context. This paper tackles Arabic text summarization problem. The semantic redundancy and insignificance will be removed from the summarized text. This can be achieved by checking the text entailment relation, and lexical cohesion. Accordingly, a text summarization approach (called LCEAS) based on lexical cohesion and text entailment relation is developed. In LCEAS, text entailment approach is enhanced to suit Arabic language. Roots and semantic-relations are used between the senses of the words to extract the common words. New threshold values are specified to suit entailment based segmentation for Arabic text. LCEAS is a single document summarization, which is constructed using extraction technique. To evaluate LCEAS, its performance is compared with previous Arabic text summarization systems. Each system output is compared against Essex Arabic Summaries Corpus (EASC) corpus (the model summaries), using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Automatic Summarization Engineering (AutoSummEng) metrics. The outcome of LCEAS indicates that the developed approach outperforms the previous Arabic text summarization systems. KeywordsText Summarization; Text Segmentation; Lexical Cohesion; Text Entailment; Natural Language Processing.",
"title": ""
},
{
"docid": "neg:1840478_11",
"text": "ing audit logs",
"title": ""
},
{
"docid": "neg:1840478_12",
"text": "Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed, these are the efficiency in scanning high-dimensional parametric spaces and the need for representative image features which require significant efforts of manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state-of-the-art. To our knowledge, this is the first successful demonstration of the DL potential to detection and segmentation in full 3D data with parametrized representations.",
"title": ""
},
{
"docid": "neg:1840478_13",
"text": "Anomaly detection aims to detect abnormal events by a model of normality. It plays an important role in many domains such as network intrusion detection, criminal activity identity and so on. With the rapidly growing size of accessible training data and high computation capacities, deep learning based anomaly detection has become more and more popular. In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to normal data distribution. Proper ensemble of anomaly scores is shown to improve the stability of discriminator effectively. The proposed method has achieved significant improvement than other anomaly detection methods on Cifar10 and UCI datasets.",
"title": ""
},
{
"docid": "neg:1840478_14",
"text": "Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e, casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer and is targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing for multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides greater extents of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best performing score to date on the well-studied TrecQA dataset.",
"title": ""
},
{
"docid": "neg:1840478_15",
"text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.",
"title": ""
},
{
"docid": "neg:1840478_16",
"text": "The category theoretic structures of monads and comonads can be used as an abstraction mechanism for simplifying both language semantics and programs. Monads have been used to structure impure computations, whilst comonads have been used to structure context-dependent computations. Interestingly, the class of computations structured by monads and the class of computations structured by comonads are not mutually exclusive. This paper formalises and explores the conditions under which a monad and a comonad can both structure the same notion of computation: when a comonad is left adjoint to a monad. Furthermore, we examine situations where a particular monad/comonad model of computation is deficient in capturing the essence of a computational pattern and provide a technique for calculating an alternative monad or comonad structure which fully captures the essence of the computation. Included is some discussion on how to choose between a monad or comonad structure in the case where either can be used to capture a particular notion of computation.",
"title": ""
},
{
"docid": "neg:1840478_17",
"text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.",
"title": ""
},
{
"docid": "neg:1840478_18",
"text": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.",
"title": ""
},
{
"docid": "neg:1840478_19",
"text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.",
"title": ""
}
] |
1840479 | Developing a Teacher Dashboard For Use with Intelligent Tutoring Systems | [
{
"docid": "pos:1840479_0",
"text": "We present a machine-learned model that can automatically detect when a student using an intelligent tutoring system is off-task, i.e., engaged in behavior which does not involve the system or a learning task. This model was developed using only log files of system usage (i.e. no screen capture or audio/video data). We show that this model can both accurately identify each student's prevalence of off-task behavior and can distinguish off-task behavior from when the student is talking to the teacher or another student about the subject matter. We use this model in combination with motivational and attitudinal instruments, developing a profile of the attitudes and motivations associated with off-task behavior, and compare this profile to the attitudes and motivations associated with other behaviors in intelligent tutoring systems. We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.",
"title": ""
},
{
"docid": "pos:1840479_1",
"text": "In this paper, we present work on learning analytics that aims to support learners and teachers through dashboard applications, ranging from small mobile applications to learnscapes on large public displays. Dashboards typically capture and visualize traces of learning activities, in order to promote awareness, reflection, and sense-making, and to enable learners to define goals and track progress toward these goals. Based on an analysis of our own work and a broad range of similar learning dashboards, we identify HCI issues for this exciting research area.",
"title": ""
},
{
"docid": "pos:1840479_2",
"text": "Although learning with Intelligent Tutoring Systems (ITS) has been well studied, little research has investigated what role teachers can play, if empowered with data. Many ITSs provide student performance reports, but they may not be designed to serve teachers’ needs well, which is important for a well-designed dashboard. We investigated what student data is most helpful to teachers and how they use data to adjust and individualize instruction. Specifically, we conducted Contextual Inquiry interviews with teachers and used Interpretation Sessions and Affinity Diagramming to analyze the data. We found that teachers generate data on students’ concept mastery, misconceptions and errors, and utilize data provided by ITSs and other software. Teachers use this data to drive instruction and remediate issues on an individual and class level. Our study uncovers how data can support teachers in helping students learn and provides a solid foundation and recommendations for designing a teacher’s dashboard.",
"title": ""
},
{
"docid": "pos:1840479_3",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
}
] | [
{
"docid": "neg:1840479_0",
"text": "Eye gaze tracking system has been widely researched for the replacement of the conventional computer interfaces such as the mouse and keyboard. In this paper, we propose the long range binocular eye gaze tracking system that works from 1.5 m to 2.5 m with allowing a head displacement in depth. The 3D position of the user's eye is obtained from the two wide angle cameras. A high resolution image of the eye is captured using the pan, tilt, and focus controlled narrow angle camera. The angles for maneuvering the pan and tilt motor are calculated by the proposed calibration method based on virtual camera model. The performance of the proposed calibration method is verified in terms of speed and convenience through the experiment. The narrow angle camera keeps tracking the eye while the user moves his head freely. The point-of-gaze (POG) of each eye onto the screen is calculated by using a 2D mapping based gaze estimation technique and the pupil center corneal reflection (PCCR) vector. PCCR vector modification method is applied to overcome the degradation in accuracy with displacements of the head in depth. The final POG is obtained by the average of the two POGs. Experimental results show that the proposed system robustly works for a large screen TV from 1.5 m to 2.5 m distance with displacements of the head in depth (+20 cm) and the average angular error is 0.69°.",
"title": ""
},
{
"docid": "neg:1840479_1",
"text": "This paper focuses on approaches to building a text automatic summarization model for news articles, generating a one-sentence summarization that mimics the style of a news title given some paragraphs. We managed to build and train two relatively complex deep learning models that outperformed our baseline model, which is a simple feed forward neural network. We explored Recurrent Neural Network models with encoder-decoder using LSTM and GRU cells, and with/without attention. We obtained some results that we then measured by calculating their respective ROUGE scores with respect to the actual references. For future work, we believe abstractive method of text summarization is a power way of summarizing texts, and we will continue with this approach. We think that the deficiencies currently embedded in our language model can be improved by better fine-tuning the model, more deep-learning method exploration, as well as larger training dataset.",
"title": ""
},
{
"docid": "neg:1840479_2",
"text": "Cerebellar lesions can cause motor deficits and/or the cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome). We used voxel-based lesion-symptom mapping to test the hypothesis that the cerebellar motor syndrome results from anterior lobe damage whereas lesions in the posterolateral cerebellum produce the CCAS. Eighteen patients with isolated cerebellar stroke (13 males, 5 females; 20-66 years old) were evaluated using measures of ataxia and neurocognitive ability. Patients showed a wide range of motor and cognitive performance, from normal to severely impaired; individual deficits varied according to lesion location within the cerebellum. Patients with damage to cerebellar lobules III-VI had worse ataxia scores: as predicted, the cerebellar motor syndrome resulted from lesions involving the anterior cerebellum. Poorer performance on fine motor tasks was associated primarily with strokes affecting the anterior lobe extending into lobule VI, with right-handed finger tapping and peg-placement associated with damage to the right cerebellum, and left-handed finger tapping associated with left cerebellar damage. Patients with the CCAS in the absence of cerebellar motor syndrome had damage to posterior lobe regions, with lesions leading to significantly poorer scores on language (e.g. right Crus I and II extending through IX), spatial (bilateral Crus I, Crus II, and right lobule VIII), and executive function measures (lobules VII-VIII). These data reveal clinically significant functional regions underpinning movement and cognition in the cerebellum, with a broad anterior-posterior distinction. Motor and cognitive outcomes following cerebellar damage appear to reflect the disruption of different cerebro-cerebellar motor and cognitive loops.",
"title": ""
},
{
"docid": "neg:1840479_3",
"text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.",
"title": ""
},
{
"docid": "neg:1840479_4",
"text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.",
"title": ""
},
{
"docid": "neg:1840479_5",
"text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.",
"title": ""
},
{
"docid": "neg:1840479_6",
"text": "BACKGROUND\nDuplex ultrasound investigation has become the reference standard in assessing the morphology and haemodynamics of the lower limb veins. The project described in this paper was an initiative of the Union Internationale de Phlébologie (UIP). The aim was to obtain a consensus of international experts on the methodology to be used for assessment of anatomy of superficial and perforating veins in the lower limb by ultrasound imaging.\n\n\nMETHODS\nThe authors performed a systematic review of the published literature on duplex anatomy of the superficial and perforating veins of the lower limbs; afterwards they invited a group of experts from a wide range of countries to participate in this project. Electronic submissions from the authors and the experts (text and images) were made available to all participants via the UIP website. The authors prepared a draft document for discussion at the UIP Chapter meeting held in San Diego, USA in August 2003. Following this meeting a revised manuscript was circulated to all participants and further comments were received by the authors and included in subsequent versions of the manuscript. Eventually, all participants agreed the final version of the paper.\n\n\nRESULTS\nThe experts have made detailed recommendations concerning the methods to be used for duplex ultrasound examination as well as the interpretation of images and measurements obtained. This document provides a detailed methodology for complete ultrasound assessment of the anatomy of the superficial and perforating veins in the lower limbs.\n\n\nCONCLUSIONS\nThe authors and a large group of experts have agreed a methodology for the investigation of the lower limbs venous system by duplex ultrasonography, with specific reference to the anatomy of the main superficial veins and perforators of the lower limbs in healthy and varicose subjects.",
"title": ""
},
{
"docid": "neg:1840479_7",
"text": "Nowadays, Information Technology (IT) plays an important role in efficiency and effectiveness of the organizational performance. As an IT application, Enterprise Resource Planning (ERP) systems is considered one of the most important IT applications because it enables the organizations to connect and interact with its administrative units in order to manage data and organize internal procedures. Many institutions use ERP systems, most notably Higher Education Institutions (HEIs). However, many projects fail or exceed scheduling and budget constraints; the rate of failure in HEIs sector is higher than in other sectors. With HEIs’ recent movement to implement ERP systems and the lack of research studies examining successful implementation in HEIs, this paper provides a critical literature review with a special focus on Saudi Arabia. Further, it defines Critical Success Factors (CSFs) contributing to the success of ERP implementation in HEIs. This paper is part of a larger research effort aiming to provide guidelines and useful findings that help HEIs to manage the challenges for ERP systems and define CSFs that will help practitioners to implement them in the Saudi context.",
"title": ""
},
{
"docid": "neg:1840479_8",
"text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires to solve a linear programming (LP) problem, but for complex-valued systems it needs to solve a second-order cone programming (SOCP) problem, which demands more computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.",
"title": ""
},
{
"docid": "neg:1840479_9",
"text": "Video Question Answering is a challenging problem in visual information retrieval, which provides the answer to the referenced video content according to the question. However, the existing visual question answering approaches mainly tackle the problem of static image question, which may be ineffectively for video question answering due to the insufficiency of modeling the temporal dynamics of video contents. In this paper, we study the problem of video question answering by modeling its temporal dynamics with frame-level attention mechanism. We propose the attribute-augmented attention network learning framework that enables the joint frame-level attribute detection and unified video representation learning for video question answering. We then incorporate the multi-step reasoning process for our proposed attention network to further improve the performance. We construct a large-scale video question answering dataset. We conduct the experiments on both multiple-choice and open-ended video question answering tasks to show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "neg:1840479_10",
"text": "The Internet of Things (IoT) is the next big wave in computing characterized by large scale open ended heterogeneous network of things, with varying sensing, actuating, computing and communication capabilities. Compared to the traditional field of autonomic computing, the IoT is characterized by an open ended and highly dynamic ecosystem with variable workload and resource availability. These characteristics make it difficult to implement self-awareness capabilities for IoT to manage and optimize itself. In this work, we introduce a methodology to explore and learn the trade-offs of different deployment configurations to autonomously optimize the QoS and other quality attributes of IoT applications. Our experiments demonstrate that our proposed methodology can automate the efficient deployment of IoT applications in the presence of multiple optimization objectives and variable operational circumstances.",
"title": ""
},
{
"docid": "neg:1840479_11",
"text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.",
"title": ""
},
{
"docid": "neg:1840479_12",
"text": "Deep learning methods have typically been trained on large datasets in which many training examples are available. However, many real-world product datasets have only a small number of images available for each product. We explore the use of deep learning methods for recognizing object instances when we have only a single training example per class. We show that feedforward neural networks outperform state-of-the-art methods for recognizing objects from novel viewpoints even when trained from just a single image per object. To further improve our performance on this task, we propose to take advantage of a supplementary dataset in which we observe a separate set of objects from multiple viewpoints. We introduce a new approach for training deep learning methods for instance recognition with limited training data, in which we use an auxiliary multi-view dataset to train our network to be robust to viewpoint changes. We find that this approach leads to a more robust classifier for recognizing objects from novel viewpoints, outperforming previous state-of-the-art approaches including keypoint-matching, template-based techniques, and sparse coding.",
"title": ""
},
{
"docid": "neg:1840479_13",
"text": "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"title": ""
},
{
"docid": "neg:1840479_14",
"text": "This paper presents a waveform modeling and generation method for speech bandwidth extension (BWE) using stacked dilated convolutional neural networks (CNNs) with causal or non-causal convolutional layers. Such dilated CNNs describe the predictive distribution for each wideband or high-frequency speech sample conditioned on the input narrowband speech samples. Distinguished from conventional frame-based BWE approaches, the proposed methods can model the speech waveforms directly and therefore avert the spectral conversion and phase estimation problems. Experimental results prove that the BWE methods proposed in this paper can achieve better performance than the state-of-the-art frame-based approach utilizing recurrent neural networks (RNNs) incorporating long shortterm memory (LSTM) cells in subjective preference tests.",
"title": ""
},
{
"docid": "neg:1840479_15",
"text": "In dialogical argumentation, it is often assumed that the involved parties will always correctly identify the intended statements posited by each other and realize all of the associated relations, conform to the three acceptability states (accepted, rejected, undecided), adjust their views whenever new and correct information comes in, and that a framework handling only attack relations is sufficient to represent their opinions. Although it is natural to make these assumptions as a starting point for further research, dropping some of them has become quite challenging. Probabilistic argumentation is one of the approaches that can be harnessed for more accurate user modelling. The epistemic approach allows us to represent how much a given argument is believed or disbelieved by a given person, offering us the possibility to express more than just three agreement states. It comes equipped with a wide range of postulates, including those that do not make any restrictions concerning how initial arguments should be viewed. Thus, this approach is potentially more suitable for handling beliefs of the people that have not fully disclosed their opinions or counterarguments with respect to standard Dung’s semantics. The constellation approach can be used to represent the views of different people concerning the structure of the framework we are dealing with, including situations in which not all relations are acknowledged or when they are seen differently than intended. Finally, bipolar argumentation frameworks can be used to express both positive and negative relations between arguments. In this paper we will describe the results of an experiment in which participants were asked to judge dialogues in terms of agreement and structure. We will compare our findings with the aforementioned assumptions as well as with the constellation and epistemic approaches to probabilistic argumentation and bipolar argumentation. Keywords— Dialogical argumentation, probabilistic argumentation, abstract argumentation ∗This research is funded by EPSRC Project EP/N008294/1 “Framework for Computational Persuasion”.We thank the reviewers for their valuable comments that helped us to improve this paper.",
"title": ""
},
{
"docid": "neg:1840479_16",
"text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840479_17",
"text": "A nonlinear dynamic model for a quadrotor unmanned aerial vehicle is presented with a new vision of state parameter control which is based on Euler angles and open loop positions state observer. This method emphasizes on the control of roll, pitch and yaw angle rather than the translational motions of the UAV. For this reason the system has been presented into two cascade partial parts, the first one relates the rotational motion whose the control law is applied in a closed loop form and the other one reflects the translational motion. A dynamic feedback controller is developed to transform the closed loop part of the system into linear, controllable and decoupled subsystem. The wind parameters estimation of the quadrotor is used to avoid more sensors. Hence an estimator of resulting aerodynamic moments via Lyapunov function is developed. Performance and robustness of the proposed controller are tested in simulation.",
"title": ""
},
{
"docid": "neg:1840479_18",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
}
] |
1840480 | Data Mining Techniques for Detecting Household Characteristics Based on Smart Meter Data | [
{
"docid": "pos:1840480_0",
"text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.",
"title": ""
},
{
"docid": "pos:1840480_1",
"text": "In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems, that can be independently solved in main-memory using efficient lattice search techniques, and using simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input-sequences, and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.",
"title": ""
}
] | [
{
"docid": "neg:1840480_0",
"text": "This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we analyze tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2% accuracy with Long Short Term Memory (LSTM) without normalizer.",
"title": ""
},
{
"docid": "neg:1840480_1",
"text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these",
"title": ""
},
{
"docid": "neg:1840480_2",
"text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.",
"title": ""
},
{
"docid": "neg:1840480_3",
"text": "Mechanical properties of living cells are commonly described in terms of the laws of continuum mechanics. The purpose of this report is to consider the implications of an alternative approach that emphasizes the discrete nature of stress bearing elements in the cell and is based on the known structural properties of the cytoskeleton. We have noted previously that tensegrity architecture seems to capture essential qualitative features of cytoskeletal shape distortion in adherent cells (Ingber, 1993a; Wang et al., 1993). Here we extend those qualitative notions into a formal microstructural analysis. On the basis of that analysis we attempt to identify unifying principles that might underlie the shape stability of the cytoskeleton. For simplicity, we focus on a tensegrity structure containing six rigid struts interconnected by 24 linearly elastic cables. Cables carry initial tension (‘‘prestress’’) counterbalanced by compression of struts. Two cases of interconnectedness between cables and struts are considered: one where they are connected by pin-joints, and the other where the cables run through frictionless loops at the junctions. At the molecular level, the pinned structure may represent the case in which different cytoskeletal filaments are cross-linked whereas the looped structure represents the case where they are free to slip past one another. The system is then subjected to uniaxial stretching. Using the principal of virtual work, stretching force vs. extension and structural stiffness vs. stretching force relationships are calculated for different prestresses. The stiffness is found to increase with increasing prestress and, at a given prestress, to increase approximately linearly with increasing stretching force. This behavior is consistent with observations in living endothelial cells exposed to shear stresses (Wang & Ingber, 1994). At a given prestress, the pinned structure is found to be stiffer than the looped one, a result consistent with data on mechanical behavior of isolated, cross-linked and uncross-linked actin networks (Wachsstock et al., 1993). On the basis of our analysis we concluded that architecture and the prestress of the cytoskeleton might be key features that underlie a cell’s ability to regulate its shape. 7 1996 Academic Press Limited",
"title": ""
},
{
"docid": "neg:1840480_4",
"text": "The psychology of conspiracy theory beliefs is not yet well understood, although research indicates that there are stable individual differences in conspiracist ideation - individuals' general tendency to engage with conspiracy theories. Researchers have created several short self-report measures of conspiracist ideation. These measures largely consist of items referring to an assortment of prominent conspiracy theories regarding specific real-world events. However, these instruments have not been psychometrically validated, and this assessment approach suffers from practical and theoretical limitations. Therefore, we present the Generic Conspiracist Beliefs (GCB) scale: a novel measure of individual differences in generic conspiracist ideation. The scale was developed and validated across four studies. In Study 1, exploratory factor analysis of a novel 75-item measure of non-event-based conspiracist beliefs identified five conspiracist facets. The 15-item GCB scale was developed to sample from each of these themes. Studies 2, 3, and 4 examined the structure and validity of the GCB, demonstrating internal reliability, content, criterion-related, convergent and discriminant validity, and good test-retest reliability. In sum, this research indicates that the GCB is a psychometrically sound and practically useful measure of conspiracist ideation, and the findings add to our theoretical understanding of conspiracist ideation as a monological belief system unpinned by a relatively small number of generic assumptions about the typicality of conspiratorial activity in the world.",
"title": ""
},
{
"docid": "neg:1840480_5",
"text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method",
"title": ""
},
{
"docid": "neg:1840480_6",
"text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.",
"title": ""
},
{
"docid": "neg:1840480_7",
"text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when highresolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the the number of surveillance cameras in the city increases, the videos that captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination condition, and diverse angles of view. Faces in these images are generally small in size. Several studies addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Later, systematically analysis of the works on this topic is presented by catogory. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained lowresolution face recognition and compare them with the result that use synthetic low-resolution data. Finally, we summarized the general limitations and speculate a priorities for the future effort.",
"title": ""
},
{
"docid": "neg:1840480_8",
"text": "The last several years have seen intensive interest in exploring neural-networkbased models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.",
"title": ""
},
{
"docid": "neg:1840480_9",
"text": "This paper presents a formal analysis of the train to trackside communication protocols used in the European Railway Tra c Management System (ERTMS) standard, and in particular the EuroRadio protocol. This protocol is used to secure important commands sent between train and trackside, such as movement authority and emergency stop messages. We perform our analysis using the applied pi-calculus and the ProVerif tool. This provides a powerful and expressive framework for protocol analysis and allows to check a wide range of security properties based on checking correspondence assertions. We show how it is possible to model the protocol’s counter-style timestamps in this framework. We define ProVerif assertions that allow us to check for secrecy of long and short term keys, authenticity of entities, message insertion, deletion, replay and reordering. We find that the protocol provides most of these security features, however it allows undetectable message deletion and the forging of emergency messages. We discuss the relevance of these results and make recommendations to further enhance the security of ERTMS.",
"title": ""
},
{
"docid": "neg:1840480_10",
"text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification–based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.",
"title": ""
},
{
"docid": "neg:1840480_11",
"text": "BACKGROUND\nEffective control of (upright) body posture requires a proper representation of body orientation. Stroke patients with pusher syndrome were shown to suffer from severely disturbed perception of own body orientation. They experience their body as oriented 'upright' when actually tilted by nearly 20 degrees to the ipsilesional side. Thus, it can be expected that postural control mechanisms are impaired accordingly in these patients. Our aim was to investigate pusher patients' spontaneous postural responses of the non-paretic leg and of the head during passive body tilt.\n\n\nMETHODS\nA sideways tilting motion was applied to the trunk of the subject in the roll plane. Stroke patients with pusher syndrome were compared to stroke patients not showing pushing behaviour, patients with acute unilateral vestibular loss, and non brain damaged subjects.\n\n\nRESULTS\nCompared to all groups without pushing behaviour, the non-paretic leg of the pusher patients showed a constant ipsiversive tilt across the whole tilt range for an amount which was observed in the non-pusher subjects when they were tilted for about 15 degrees into the ipsiversive direction.\n\n\nCONCLUSION\nThe observation that patients with acute unilateral vestibular loss showed no alterations of leg posture indicates that disturbed vestibular afferences alone are not responsible for the disordered leg responses seen in pusher patients. Our results may suggest that in pusher patients a representation of body orientation is disturbed that drives both conscious perception of body orientation and spontaneous postural adjustment of the non-paretic leg in the roll plane. The investigation of the pusher patients' leg-to-trunk orientation thus could serve as an additional bedside tool to detect pusher syndrome in acute stroke patients.",
"title": ""
},
{
"docid": "neg:1840480_12",
"text": "Pictogram communication is successful when participants at both end of the communication channel share a common pictogram interpretation. Not all pictograms carry universal interpretation, however; the issue of ambiguous pictogram interpretation must be addressed to assist pictogram communication. To unveil the ambiguity possible in pictogram interpretation, we conduct a human subject experiment to identify culture-specific criteria employed by humans by detecting cultural differences in pictogram interpretations. Based on the findings, we propose a categorical semantic relevance measure which calculates how relevant a pictogram is to a given interpretation in terms of a given pictogram category. The proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. The WordNet, the ChaSen, and the EDR Electronic Dictionary registered to the Language Grid are utilized to merge synonymous pictogram interpretations and to categorize pictogram interpretations into super-concept categories. We show how the Language Grid can assist the crosscultural research process.",
"title": ""
},
{
"docid": "neg:1840480_13",
"text": "We consider the problem of scheduling tasks requiring certain processing times on one machine so that the busy time of the machine is maximized. The problem is to find a probabilistic online algorithm with reasonable worst case performance ratio. We answer an open problem of Lipton and Tompkins concerning the best possible ratio that can be achieved. Furthermore, we extend their results to anm-machine analogue. Finally, a variant of the problem is analyzed, in which the machine is provided with a buffer to store one job. Wir betrachten das Problem der Zuteilung von Aufgaben bestimmter Rechenzeit auf einem Rechner, um so seine Auslastung zu maximieren. Die Aufgabe besteht darin, einen probabilistischen Online-Algorithmus mit vernünftigem worst-case Performance-Verhältnis zu finden. Wir geben die Antwort auf ein offenes Problem von Lipton und Tompkins, das das bestmögliche Verhältnis betrifft. Weiter verallgemeinern wir ihre Ergebnisse auf einm-Maschinen-Analogon. Schließlich wird eine Variante des Problems analysiert, in dem der Rechner mit einem Zwischenspeicher für einen Job versehen ist.",
"title": ""
},
{
"docid": "neg:1840480_14",
"text": "In this paper, A Novel 1 to 4 modified Wilkinson power divider operating over the frequency range of (3 GHz to 8 GHz) is proposed. The design perception of the proposed divider based on two different stages and printed on FR4 (Epoxy laminate material) with the thickness of 1.57mm and єr =4.3 respectively. The modified design of this power divider including curved corners instead of the sharp edges and some modification in the length of matching stubs. In addition, this paper contain the power divider with equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports and satisfactory isolation performance has been obtained over the mentioned frequency range. The design concept and optimization development is practicable through CST simulation software.",
"title": ""
},
{
"docid": "neg:1840480_15",
"text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TFIDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.",
"title": ""
},
{
"docid": "neg:1840480_16",
"text": "for 4F2 DRAM Cell Array with sub 40 nm Technology Jae-Man Yoon, Kangyoon Lee, Seung-Bae Park, Seong-Goo Kim, Hyoung-Won Seo, Young-Woong Son, Bong-Soo Kim, Hyun-Woo Chung, Choong-Ho Lee*, Won-Sok Lee* *, Dong-Chan Kim* * *, Donggun Park*, Wonshik Lee and Byung-Il Ryu ATD Team, Device Research Team*, CAEP*, PD Team***, Semiconductor R&D Division, Samsung Electronics Co., San #24, Nongseo-Dong, Kiheung-Gu, Yongin-City, Kyunggi-Do, 449-711, Korea Tel) 82-31-209-4741, Fax) 82-31-209-3274, E-mail)",
"title": ""
},
{
"docid": "neg:1840480_17",
"text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.",
"title": ""
},
{
"docid": "neg:1840480_18",
"text": "To demonstrate generality and to illustrate some additional properties of the method, we also apply the explanation method to a second domain: classifying news stories. The 20 newsgroups data set is a benchmark data set used in document classification research. It contains about 20,000 news items from 20 newsgroups representing different topics, and has a vocabulary of 26,214 different words (after stemming) (Lang 1995). The 20 topics can be categorized into seven top-level usenet categories with related news items: alternative (alt), computers (comp), miscellaneous (misc), recreation (rec), science (sci), society (soc), and talk (talk). One typical problem studied with this data set is to build classifiers to identify stories from these seven high-level news categories, which for our purposes gives a wide variety of different topics across which to provide document classification explanations. Looking at the seven high-level categories also provides realistic richness to the task: in many real document classification tasks, the class of interest is actually a collection (disjunction) of related concepts (consider, for example, “hate speech” in the safe-advertising domain).",
"title": ""
},
{
"docid": "neg:1840480_19",
"text": "By analogy with Internet of things, Internet of vehicles (IoV) that enables ubiquitous information exchange and content sharing among vehicles with little or no human intervention is a key enabler for the intelligent transportation industry. In this paper, we study how to combine both the physical and social layer information for realizing rapid content dissemination in device-to-device vehicle-to-vehicle (D2D-V2V)-based IoV networks. In the physical layer, headway distance of vehicles is modeled as a Wiener process, and the connection probability of D2D-V2V links is estimated by employing the Kolmogorov equation. In the social layer, the social relationship tightness that represents content selection similarities is obtained by Bayesian nonparametric learning based on real-world social big data, which are collected from the largest Chinese microblogging service Sina Weibo and the largest Chinese video-sharing site Youku. Then, a price-rising-based iterative matching algorithm is proposed to solve the formulated joint peer discovery, power control, and channel selection problem under various quality-of-service requirements. Finally, numerical results demonstrate the effectiveness and superiority of the proposed algorithm from the perspectives of weighted sum rate and matching satisfaction gains.",
"title": ""
}
] |
1840481 | Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences | [
{
"docid": "pos:1840481_0",
"text": "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.",
"title": ""
}
] | [
{
"docid": "neg:1840481_0",
"text": "This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online community (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of an European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, including Twitter, Wikipedia, and Liquidfeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for research and practice. Journal of Information Technology advance online publication, 10 February 2015; doi:10.1057/jit.2014.37",
"title": ""
},
{
"docid": "neg:1840481_1",
"text": "This article focuses on the ethical analysis of cyber warfare, the warfare characterised by the deployment of information and communication technologies. It addresses the vacuum of ethical principles surrounding this phenomenon by providing an ethical framework for the definition of such principles. The article is divided in three parts. The first one considers cyber warfare in relation to the so-called information revolution and provides a conceptual analysis of this kind of warfare. The second part focuses on the ethical problems posed by cyber warfare and describes the issues that arise when Just War Theory is endorsed to address them. The final part introduces Information Ethics as a suitable ethical framework for the analysis of cyber warfare, and argues that the vacuum of ethical principles for this kind warfare is overcome when Just War Theory and Information Ethics are merged together.",
"title": ""
},
{
"docid": "neg:1840481_2",
"text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.",
"title": ""
},
{
"docid": "neg:1840481_3",
"text": "A main puzzle of deep networks revolves around the apparent absence of overfitting intended as robustness of the expected error against overparametrization, despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or crossentropy loss) Hessian. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results. The result extends to deep nonlinear networks two key properties of gradient descent for linear networks, that have been recently recognized (1) to provide a form of implicit regularization: 1. For classification, which is the main application of today’s deep networks, there is asymptotic convergence to the maximum margin solution by minimization of loss functions such as the logistic, the cross entropy and the exp-loss . The maximum margin solution guarantees good classification error for “low noise” datasets. Importantly, this property holds independently of the initial conditions. Because of this property, our proposition guarantees a maximum margin solution also for deep nonlinear networks. 2. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, and asymptotically converges to the minimum norm solution for appropriate initial conditions of gradient descent. This implies that there is usually an optimum early stopping that avoids overfitting of the expected risk. This property, valid for the square loss and many other loss functions, is relevant especially for regression. In the case of deep nonlinear networks the solution however is not expected to be strictly minimum norm, unlike the linear case. The robustness to overparametrization has suggestive implications for the robustness of the architecture of deep convolutional networks with respect to the curse of dimensionality.",
"title": ""
},
{
"docid": "neg:1840481_4",
"text": "This paper deals with a creation of the RGB-D database by using Microsoft Kinect device. One of the main uses of Kinect is measurement and subsequent creation the so-called depth maps of the 3D scenes. The maps obtained by Kinect can be improved. Existence of databases suitable for the experiment is very important for research. One of the possible research directions is use of infrared version of the investigated scene for improvement of the depth map. However, the databases of the Kinect data which would contain the corresponding infrared images do not exist. Therefore, our aim was to create such database. We want to increase the usability of the database by adding stereo images. Moreover, the same scenes were captured by Kinect v2. It was also investigated the impact of simultaneous use Kinect v1 and Kinect v2 to improve depth map investigated the scene. The database contains sequences of objects on turntable and simple scenes containing several objects.",
"title": ""
},
{
"docid": "neg:1840481_5",
"text": "We report on the quantitative determination of acetaminophen (paracetamol; NAPAP-d(0)) in human plasma and urine by GC-MS and GC-MS/MS in the electron-capture negative-ion chemical ionization (ECNICI) mode after derivatization with pentafluorobenzyl (PFB) bromide (PFB-Br). Commercially available tetradeuterated acetaminophen (NAPAP-d(4)) was used as the internal standard. NAPAP-d(0) and NAPAP-d(4) were extracted from 100-μL aliquots of plasma and urine with 300 μL ethyl acetate (EA) by vortexing (60s). After centrifugation the EA phase was collected, the solvent was removed under a stream of nitrogen gas, and the residue was reconstituted in acetonitrile (MeCN, 100 μL). PFB-Br (10 μL, 30 vol% in MeCN) and N,N-diisopropylethylamine (10 μL) were added and the mixture was incubated for 60 min at 30 °C. Then, solvents and reagents were removed under nitrogen and the residue was taken up with 1000 μL of toluene, from which 1-μL aliquots were injected in the splitless mode. GC-MS quantification was performed by selected-ion monitoring ions due to [M-PFB](-) and [M-PFB-H](-), m/z 150 and m/z 149 for NAPAP-d(0) and m/z 154 and m/z 153 for NAPAP-d(4), respectively. GC-MS/MS quantification was performed by selected-reaction monitoring the transition m/z 150 → m/z 107 and m/z 149 → m/z 134 for NAPAP-d(0) and m/z 154 → m/z 111 and m/z 153 → m/z 138 for NAPAP-d(4). The method was validated for human plasma (range, 0-130 μM NAPAP-d(0)) and urine (range, 0-1300 μM NAPAP-d(0)). Accuracy (recovery, %) ranged between 89 and 119%, and imprecision (RSD, %) was below 19% in these matrices and ranges. A close correlation (r>0.999) was found between the concentrations measured by GC-MS and GC-MS/MS. By this method, acetaminophen can be reliably quantified in small plasma and urine sample volumes (e.g., 10 μL). The analytical performance of the method makes it especially useful in pediatrics.",
"title": ""
},
{
"docid": "neg:1840481_6",
"text": "Firefighters suffer a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposures to hazardous substances. Support for reducing these risks is important. We built a partially occluded object reconstruction method on augmented reality glasses for first responders. We used a deep learning based on conditional generative adversarial networks to train associations between the various images of flammable and hazardous objects and their partially occluded counterparts. Our system then reconstructed an image of a new flammable object. Finally, the reconstructed image was superimposed on the input image to provide \"transparency\". The system imitates human learning about the laws of physics through experience by learning the shape of flammable objects and the flame characteristics.",
"title": ""
},
{
"docid": "neg:1840481_7",
"text": "A novel transition from rectangular waveguide to differential microstrip lines is illustrated in this paper. It transfers the dominant TE10 mode signal in a rectangular waveguide to a differential mode signal in the coupled microstrip lines. The common mode signal in the coupled microstrip lines is highly rejected. The transition was designed at 75 GHz, which is the center frequency of E band and simulated by a 3D EM simulator. It has a wide bandwidth of 19 GHz for -15 dB return loss of the waveguide port. Several prototypes of the transitions were fabricated and measured. The measurement results agree very well with the simulation. The compact size and the simple fabrication enable the transition to be employed in a number of millimeter-wave applications.",
"title": ""
},
{
"docid": "neg:1840481_8",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
},
{
"docid": "neg:1840481_9",
"text": "In this paper, we propose a novel approach for traffic accident anticipation through (i) Adaptive Loss for Early Anticipation (AdaLEA) and (ii) a large-scale self-annotated incident database for anticipation. The proposed AdaLEA allows a model to gradually learn an earlier anticipation as training progresses. The loss function adaptively assigns penalty weights depending on how early the model can anticipate a traffic accident at each epoch. Additionally, we construct a Near-miss Incident DataBase for anticipation. This database contains an enormous number of traffic near-miss incident videos and annotations for detail evaluation of two tasks, risk anticipation and risk-factor anticipation. In our experimental results, we found our proposal achieved the highest scores for risk anticipation (+6.6% better on mean average precision (mAP) and 2.36 sec earlier than previous work on the average time-to-collision (ATTC)) and risk-factor anticipation (+4.3% better on mAP and 0.70 sec earlier than previous work on ATTC).",
"title": ""
},
{
"docid": "neg:1840481_10",
"text": "In order to cope with real-world problems more effectively, we tend to design a decision support system for tuberculosis bacterium class identification. In this paper, we are concerned to propose a fuzzy diagnosability approach, which takes value between {0, 1} and based on observability of events, we formalized the construction of diagnoses that are used to perform diagnosis. In particular, we present a framework of the fuzzy expert system; discuss the suitability of artificial intelligence as a novel soft paradigm and reviews work from the literature for the development of a medical diagnostic system. The newly proposed approach allows us to deal with problems of diagnosability for both crisp and fuzzy value of input data. Accuracy analysis of designed decision support system based on demographic data was done by comparing expert knowledge and system generated response. This basic emblematic approach using fuzzy inference system is presented that describes a technique to forecast the existence of bacterium and provides support platform to pulmonary researchers in identifying the ailment effectively.",
"title": ""
},
{
"docid": "neg:1840481_11",
"text": "Social network sites (SNS) have attracted considerable attention among teens and young adults who tend to connect and share common interest. Despite this popularity, the issue of students’ adoption of social network sites is still being unexplored fully in Malaysia. Driven by this factor, this study was designed to analyze the impact of social network sites on students’ academic performance in Malaysia. Using a conceptual approach, the study gathered that more students prefer the use of Facebook and Twitter in academic related discussions in complementingconventional classroom teaching and learning process. Thus, it is imperative that lecturers and academic institutions should implement the use of these applications in promoting academic excellence. As for profit oriented organizations such as bookshops, computer and smartphoneone vendors, they can promote their products through these applications and engage students to make purchases via them having understood that many students prefer and use Facebook, Twitter and Google+. The discussion from this study however does not represent the general sampling of Malaysian university students.",
"title": ""
},
{
"docid": "neg:1840481_12",
"text": "We present the design and evaluation of a multi-articular soft exosuit that is portable, fully autonomous, and provides assistive torques to the wearer at the ankle and hip during walking. Traditional rigid exoskeletons can be challenging to perfectly align with a wearer’s biological joints and can have large inertias, which can lead to the wearer altering their natural motion patterns. Exosuits, in comparison, use textiles to create tensile forces over the body in parallel with the muscles, enabling them to be light and not restrict the wearer’s kinematics. We describe the biologically inspired design and function of our exosuit, including a simplified model of the suit’s architecture and its interaction with the body. A key feature of the exosuit is that it can generate forces passively due to the body’s motion, similar to the body’s ligaments and tendons. These passively-generated forces can be supplemented by actively contracting Bowden cables using geared electric motors, to create peak forces in the suit of up to 200N. We define the suit-human series stiffness as an important parameter in the design of the exosuit and measure it on several subjects, and we perform human subjects testing to determine the biomechanical and physiological effects of the suit. Results from a five-subject study showed a minimal effect on gait kinematics and an average best-case metabolic reduction of 6.4%, comparing suit worn unpowered vs powered, during loaded walking with 34.6kg of carried mass including the exosuit and actuators (2.0kg on both legs, 10.1kg total).",
"title": ""
},
{
"docid": "neg:1840481_13",
"text": "From a system architecture perspective, 3D technology can satisfy the high memory bandwidth demands that future multicore/manycore architectures require. This article presents a 3D DRAM architecture design and the potential for using 3D DRAM stacking for both L2 cache and main memory in 3D multicore architecture.",
"title": ""
},
{
"docid": "neg:1840481_14",
"text": "In 1991, a novel robot, MIT-MANUS, was introduced to study the potential that robots might assist in and quantify the neuro-rehabilitation of motor function. MIT-MANUS proved an excellent tool for shoulder and elbow rehabilitation in stroke patients, showing in clinical trials a reduction of impairment in movements confined to the exercised joints. This successful proof of principle as to additional targeted and intensive movement treatment prompted a test of robot training examining other limb segments. This paper focuses on a robot for wrist rehabilitation designed to provide three rotational degrees-of-freedom. The first clinical trial of the device will enroll 200 stroke survivors. Ultimately 160 stroke survivors will train with both the proximal shoulder and elbow MIT-MANUS robot, as well as with the novel distal wrist robot, in addition to 40 stroke survivor controls. So far 52 stroke patients have completed the robot training (ongoing protocol). Here, we report on the initial results on 36 of these volunteers. These results demonstrate that further improvement should be expected by adding additional training to other limb segments.",
"title": ""
},
{
"docid": "neg:1840481_15",
"text": "Chitosan was prepared from shrimp processing waste (shell) using the same chemical process as described for the other crustacean species with minor modification in the treatment condition. The physicochemical properties, molecular weight (165394g/mole), degree of deacetylation (75%), ash content as well as yield (15%) of prepared chitosan indicated that shrimp processing waste (shell) are a good source of chitosan. The water binding capacity (502%) and fat binding capacity (370%) of prepared chitosan are good agreement with the commercial chitosan. FT-IR spectra gave characteristics bands of –NH2 at 3443cm -1 and carbonyl at 1733cm. X-ray diffraction (XRD) patterns also indicated two characteristics crystalline peaks approximately at 10° and 20° (2θ).The surface morphology was examined using scanning electron microscopy (SEM). Index Term-Shrimp waste, Chitin, Deacetylation, Chitosan,",
"title": ""
},
{
"docid": "neg:1840481_16",
"text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.",
"title": ""
},
{
"docid": "neg:1840481_17",
"text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.",
"title": ""
},
{
"docid": "neg:1840481_18",
"text": "Question answering over knowledge graph (QA-KG) aims to use facts in the knowledge graph (KG) to answer natural language questions. It helps end users more efficiently and more easily access the substantial and valuable knowledge in the KG, without knowing its data structures. QA-KG is a nontrivial problem since capturing the semantic meaning of natural language is difficult for a machine. Meanwhile, many knowledge graph embedding methods have been proposed. The key idea is to represent each predicate/entity as a low-dimensional vector, such that the relation information in the KG could be preserved. The learned vectors could benefit various applications such as KG completion and recommender systems. In this paper, we explore to use them to handle the QA-KG problem. However, this remains a challenging task since a predicate could be expressed in different ways in natural language questions. Also, the ambiguity of entity names and partial names makes the number of possible answers large. To bridge the gap, we propose an effective Knowledge Embedding based Question Answering (KEQA) framework. We focus on answering the most common types of questions, i.e., simple questions, in which each question could be answered by the machine straightforwardly if its single head entity and single predicate are correctly identified. To answer a simple question, instead of inferring its head entity and predicate directly, KEQA targets at jointly recovering the question's head entity, predicate, and tail entity representations in the KG embedding spaces. Based on a carefully-designed joint distance metric, the three learned vectors' closest fact in the KG is returned as the answer. Experiments on a widely-adopted benchmark demonstrate that the proposed KEQA outperforms the state-of-the-art QA-KG methods.",
"title": ""
},
{
"docid": "neg:1840481_19",
"text": "The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential \"insider\"). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack.",
"title": ""
}
] |
1840482 | Impedance Measurement System for Determination of Capacitive Electrode Coupling | [
{
"docid": "pos:1840482_0",
"text": "The Cole single-dispersion impedance model is based upon a constant phase element (CPE), a conductance parameter as a dependent parameter and a characteristic time constant as an independent parameter. Usually however, the time constant of tissue or cell suspensions is conductance dependent, and so the Cole model is incompatible with general relaxation theory and not a model of first choice. An alternative model with conductance as a free parameter influencing the characteristic time constant of the biomaterial has been analyzed. With this free-conductance model it is possible to separately follow CPE and conductive processes, and the nominal time constant no longer corresponds to the apex of the circular arc in the complex plane.",
"title": ""
}
] | [
{
"docid": "neg:1840482_0",
"text": "More and more people rely on mobile devices to access the Internet, which also increases the amount of private information that can be gathered from people's devices. Although today's smartphone operating systems are trying to provide a secure environment, they fail to provide users with adequate control over and visibility into how third-party applications use their private data. Whereas there are a few tools that alert users when applications leak private information, these tools are often hard to use by the average user or have other problems. To address these problems, we present PrivacyGuard, an open-source VPN-based platform for intercepting the network traffic of applications. PrivacyGuard requires neither root permissions nor any knowledge about VPN technology from its users. PrivacyGuard does not significantly increase the trusted computing base since PrivacyGuard runs in its entirety on the local device and traffic is not routed through a remote VPN server. We implement PrivacyGuard on the Android platform by taking advantage of the VPNService class provided by the Android SDK.\n PrivacyGuard is configurable, extensible, and useful for many different purposes. We investigate its use for detecting the leakage of multiple types of sensitive data, such as a phone's IMEI number or location data. PrivacyGuard also supports modifying the leaked information and replacing it with crafted data for privacy protection. According to our experiments, PrivacyGuard can detect more leakage incidents by applications and advertisement libraries than TaintDroid. We also demonstrate that PrivacyGuard has reasonable overhead on network performance and almost no overhead on battery consumption.",
"title": ""
},
{
"docid": "neg:1840482_1",
"text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.",
"title": ""
},
{
"docid": "neg:1840482_2",
"text": "Social identity threat is the notion that one of a person's many social identities may be at risk of being devalued in a particular context (C. M. Steele, S. J. Spencer, & J. Aronson, 2002). The authors suggest that in domains in which women are already negatively stereotyped, interacting with a sexist man can trigger social identity threat, undermining women's performance. In Study 1, male engineering students who scored highly on a subtle measure of sexism behaved in a dominant and sexually interested way toward an ostensible female classmate. In Studies 2 and 3, female engineering students who interacted with such sexist men, or with confederates trained to behave in the same way, performed worse on an engineering test than did women who interacted with nonsexist men. Study 4 replicated this finding and showed that women's underperformance did not extend to an English test, an area in which women are not negatively stereotyped. Study 5 showed that interacting with sexist men leads women to suppress concerns about gender stereotypes, an established mechanism of stereotype threat. Discussion addresses implications for social identity threat and for women's performance in school and at work.",
"title": ""
},
{
"docid": "neg:1840482_3",
"text": "Air quality has great impact on individual and community health. In this demonstration, we present Citisense: a mobile air quality system that enables users to track their personal air quality exposure for discovery, self-reflection, and sharing within their local communities and online social networks.",
"title": ""
},
{
"docid": "neg:1840482_4",
"text": "It this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.",
"title": ""
},
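The passage above attributes the improvement to swapping batch normalization for instance normalization and applying it at both training and testing time. A minimal sketch of that swap in a generic PyTorch convolutional block follows; the block layout and parameters are illustrative assumptions, not the paper's texture-network architecture.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, use_instance_norm: bool = True) -> nn.Sequential:
    # The only change relative to the usual recipe is the normalization layer:
    # InstanceNorm2d normalizes each sample independently and behaves identically
    # at training and testing time, which is the property the passage relies on.
    norm = nn.InstanceNorm2d(out_ch, affine=True) if use_instance_norm else nn.BatchNorm2d(out_ch)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        norm,
        nn.ReLU(inplace=True),
    )
```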
{
"docid": "neg:1840482_5",
"text": "A great many control schemes for a robot manipulator interacting with the environment have been developed in the literature in the past two decades. This paper is aimed at presenting a survey of robot interaction control schemes for a manipulator, the end effector of which comes in contact with a compliant surface. A salient feature of the work is the implementation of the schemes on an industrial robot with open control architecture equipped with a wrist force sensor. Two classes of control strategies are considered, namely, those based on static model-based compensation and those based on dynamic model-based compensation. The former provide a good steadystate behavior, while the latter enhance the behavior during the transient. The performance of the various schemes is compared in the light of disturbance rejection, and a thorough analysis is developed by means of a number of case studies.",
"title": ""
},
{
"docid": "neg:1840482_6",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
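The preceding passage describes a two-stage pipeline: the convolutional layers of an ImageNet-pretrained VGG16 extract bottleneck features from byteplot grayscale images, and an SVM is trained on those features for family classification. A rough sketch under stated assumptions is given below; the 224x224 input size and grayscale-to-RGB channel replication are assumptions for illustration, not details given in the abstract.

```python
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC

def extract_bottleneck_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 224, 224) grayscale byteplots with pixel values in 0-255."""
    base = VGG16(weights="imagenet", include_top=False, pooling="avg")  # convolutional layers only
    rgb = np.repeat(images[..., None], 3, axis=-1).astype("float32")    # replicate the channel for the RGB input
    return base.predict(preprocess_input(rgb))

# Hypothetical usage:
# features = extract_bottleneck_features(byteplot_images)
# clf = SVC(kernel="linear").fit(features, family_labels)
```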
{
"docid": "neg:1840482_7",
"text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.",
"title": ""
},
{
"docid": "neg:1840482_8",
"text": "Millions of posts are being generated in real-time by users in social networking services, such as Twitter. However, a considerable number of those posts are mundane posts that are of interest to the authors and possibly their friends only. This paper investigates the problem of automatically discovering valuable posts that may be of potential interest to a wider audience. Specifically, we model the structure of Twitter as a graph consisting of users and posts as nodes and retweet relations between the nodes as edges. We propose a variant of the HITS algorithm for producing a static ranking of posts. Experimental results on real world data demonstrate that our method can achieve better performance than several baseline methods.",
"title": ""
},
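The passage above ranks posts by running a HITS-style computation on a graph of users and posts connected by retweet relations. A sketch of the plain HITS iteration on a user-by-post adjacency matrix is shown below; the paper's method is a variant of this, so the code only illustrates the baseline idea, and the adjacency encoding is an assumption.

```python
import numpy as np

def hits_scores(adj: np.ndarray, n_iter: int = 50):
    """adj[u, p] = 1 if user u retweeted post p (an assumed encoding)."""
    n_users, n_posts = adj.shape
    hub = np.ones(n_users)    # user ("hub") scores
    auth = np.ones(n_posts)   # post ("authority") scores
    for _ in range(n_iter):
        auth = adj.T @ hub
        auth /= np.linalg.norm(auth)
        hub = adj @ auth
        hub /= np.linalg.norm(hub)
    return hub, auth          # posts would be ranked by their authority score
```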
{
"docid": "neg:1840482_9",
"text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.",
"title": ""
},
{
"docid": "neg:1840482_10",
"text": "Most existing methods for audio sentiment analysis use automatic speech recognition to convert speech to text, and feed the textual input to text-based sentiment classifiers. This study shows that such methods may not be optimal, and proposes an alternate architecture where a single keyword spotting system (KWS) is developed for sentiment detection. In the new architecture, the text-based sentiment classifier is utilized to automatically determine the most powerful sentiment-bearing terms, which is then used as the term list for KWS. In order to obtain a compact yet powerful term list, a new method is proposed to reduce text-based sentiment classifier model complexity while maintaining good classification accuracy. Finally, the term list information is utilized to build a more focused language model for the speech recognition system. The result is a single integrated solution which is focused on vocabulary that directly impacts classification. The proposed solution is evaluated on videos from YouTube.com and UT-Opinion corpus (which contains naturalistic opinionated audio collected in real-world conditions). Our experimental results show that the KWS based system significantly outperforms the traditional architecture in difficult practical tasks.",
"title": ""
},
{
"docid": "neg:1840482_11",
"text": "Electricity is a non-storable commodity for consumers, while hydropower producers may store future electricity as water in their reservoirs. Consequently, there is an asymmetry between producers’ and consumers’ possibilities of spot-futures arbitrage. Furthermore, marginal warehousing costs in hydro based electricity production are zero as long as water reservoirs are not full, jumping to the prevailing spot price in the case that dams are filled up and water is running over the edge without being utilised. In this explorative study, we analyse price relationships at the world’s largest multinational market place for electricity (Nord Pool). We find tha the futures price at Nord Pool periodically has been outside its (theoretical) arbitrage limits. Furthermore, the futures price and the basis have been biased and poor predictors of subsequent spot price levels and changes, respectively. Forecast errors have been systematic, and the futures price does not seem to incorporate available information. The findings indicate non-rational pricing behaviour. Alternatively, the results may represent circumstantial evidence of market power on the producer side.",
"title": ""
},
{
"docid": "neg:1840482_12",
"text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.",
"title": ""
},
{
"docid": "neg:1840482_13",
"text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.",
"title": ""
},
{
"docid": "neg:1840482_14",
"text": "The effective use of technologies supporting decision making is essential to companies’ survival. Recent studies analyzed social media technologies (SMT) in the context of smalland mediumsized enterprises (SMEs), contributing to the discussion on SMT benefits from the marketing perspective. This article focuses on the effects of SMT use on innovation. Our findings provide empirical evidence on the positive effects of SMT use for acquiring external information and for sharing knowledge and innovation performance.",
"title": ""
},
{
"docid": "neg:1840482_15",
"text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.",
"title": ""
},
{
"docid": "neg:1840482_16",
"text": "A boosting algorithm, AdaBoost.RT, is proposed for regression problems. The idea is to filter out examples with a relative estimation error that is higher than the pre-set threshold value, and then follow the AdaBoost procedure. Thus it requires to select the sub-optimal value of relative error threshold to demarcate predictions from the predictor as correct or incorrect. Some experimental results using the M5 model tree as a weak learning machine for benchmark data sets and for hydrological modeling are reported, and compared to other boosting methods, bagging and artificial neural networks, and to a single M5 model tree. AdaBoost.Rt is proved to perform better on most of the considered data sets.",
"title": ""
},
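The passage above summarizes the AdaBoost.RT idea: predictions whose relative error exceeds a pre-set threshold are treated as incorrect, and the sample distribution is re-weighted before the next round. A simplified sketch of that loop is shown below; the weak learner, the weight-update exponent, and the final combination rule are assumptions for illustration rather than the exact formulation in the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def adaboost_rt(X, y, n_rounds=10, phi=0.05, power=2):
    n = len(y)
    weights = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(n_rounds):
        model = DecisionTreeRegressor(max_depth=3).fit(X, y, sample_weight=weights)
        rel_err = np.abs(model.predict(X) - y) / np.abs(y)   # relative estimation error (assumes y != 0)
        incorrect = rel_err > phi                            # demarcate predictions as correct / incorrect
        eps = weights[incorrect].sum()
        if eps >= 1.0:                                       # weak learner no better than chance: stop
            break
        beta = max(eps, 1e-12) ** power
        learners.append(model)
        alphas.append(np.log(1.0 / beta))
        weights[~incorrect] *= beta                          # shrink weights of correctly predicted examples
        weights /= weights.sum()

    def predict(X_new):
        preds = np.array([m.predict(X_new) for m in learners])
        w = np.array(alphas)[:, None]
        return (w * preds).sum(axis=0) / w.sum()             # confidence-weighted combination of weak learners
    return predict
```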
{
"docid": "neg:1840482_17",
"text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.",
"title": ""
},
{
"docid": "neg:1840482_18",
"text": "In this paper, an efficient offline signature verification method based on an interval symbolic representation and a fuzzy similarity measure is proposed. In the feature extraction step, a set of local binary pattern-based features is computed from both the signature image and its under-sampled bitmap. Interval-valued symbolic data is then created for each feature in every signature class. As a result, a signature model composed of a set of interval values (corresponding to the number of features) is obtained for each individual’s handwritten signature class. A novel fuzzy similarity measure is further proposed to compute the similarity between a test sample signature and the corresponding interval-valued symbolic model for the verification of the test sample. To evaluate the proposed verification approach, a benchmark offline English signature data set (GPDS-300) and a large data set (BHSig260) composed of Bangla and Hindi offline signatures were used. A comparison of our results with some recent signature verification methods available in the literature was provided in terms of average error rate and we noted that the proposed method always outperforms when the number of training samples is eight or more.",
"title": ""
},
{
"docid": "neg:1840482_19",
"text": "Biometric recognition refers to the automated recognition of individuals based on their biological and behavioral characteristics such as fingerprint, face, iris, and voice. The first scientific paper on automated fingerprint matching was published by Mitchell Trauring in the journal Nature in 1963. The first objective of this paper is to document the significant progress that has been achieved in the field of biometric recognition in the past 50 years since Trauring’s landmark paper. This progress has enabled current state-of-the-art biometric systems to accurately recognize individuals based on biometric trait(s) acquired under controlled environmental conditions from cooperative users. Despite this progress, a number of challenging issues continue to inhibit the full potential of biometrics to automatically recognize humans. The second objective of this paper is to enlist such challenges, analyze the solutions proposed to overcome them, and highlight the research opportunities in this field. One of the foremost challenges is the design of robust algorithms for representing and matching biometric samples obtained from uncooperative subjects under unconstrained environmental conditions (e.g., recognizing faces in a crowd). In addition, fundamental questions such as the distinctiveness and persistence of biometric traits need greater attention. Problems related to the security of biometric data and robustness of the biometric system against spoofing and obfuscation attacks, also remain unsolved. Finally, larger system-level issues like usability, user privacy concerns, integration with the end application, and return on investment have not been adequately addressed. Unlocking the full potential of biometrics through inter-disciplinary research in the above areas will not only lead to widespread adoption of this promising technology, but will also result in wider user acceptance and societal impact. c © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
1840483 | Smart grid standards for home and building automation | [
{
"docid": "pos:1840483_0",
"text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.",
"title": ""
}
] | [
{
"docid": "neg:1840483_0",
"text": "This paper presents a flat, high gain, wide scanning, broadband continuous transverse stub (CTS) array. The design procedure, the fabrication, and an exhaustive antenna characterization are described in details. The array comprises 16 radiating slots and is fed by a corporate-feed network in hollow parallel plate waveguide (PPW) technology. A pillbox-based linear source illuminates the corporate network and allows for beam steering. The antenna is designed by using an ad hoc mode matching code recently developed for CTS arrays, providing design guidelines. The assembly technique ensures the electrical contact among the various stages of the network without using any electromagnetic choke and any bonding process. The main beam of the antenna is mechanically steered over ±40° in elevation, by moving a compact horn within the focal plane of the pillbox feeding system. Excellent performances are achieved. The features of the beam are stable within the design 27.5-31 GHz band and beyond, in the entire Ka-band (26.5-40 GHz). An antenna gain of about 29 dBi is measured at broadside at 29.25 GHz and scan losses lower than 2 dB are reported at ±40°. The antenna efficiency exceeds 80% in the whole scan range. The very good agreement between measurements and simulations validates the design procedure. The proposed design is suitable for Satcom Ka-band terminals in moving platforms, e.g., trains and planes, and also for mobile ground stations, as a multibeam sectorial antenna.",
"title": ""
},
{
"docid": "neg:1840483_1",
"text": "During a long period of time we are combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from over-fitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel serves as the first work which adds noises on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.",
"title": ""
},
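The core mechanism described above is easy to state in code: in each iteration, a small fraction of the mini-batch labels is replaced before the loss is computed. A minimal PyTorch-style sketch follows; the disturbance rate `alpha` and the uniform replacement distribution are illustrative assumptions rather than the paper's exact scheme.

```python
import torch

def disturb_labels(labels: torch.Tensor, num_classes: int, alpha: float = 0.1) -> torch.Tensor:
    """Randomly replace a fraction `alpha` of the labels with uniformly drawn class indices."""
    labels = labels.clone()
    mask = torch.rand(labels.shape[0], device=labels.device) < alpha   # samples to disturb this iteration
    labels[mask] = torch.randint(0, num_classes, (int(mask.sum()),), device=labels.device)
    return labels

# Hypothetical use inside a training step:
# noisy_targets = disturb_labels(targets, num_classes=10)
# loss = criterion(model(inputs), noisy_targets)
```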
{
"docid": "neg:1840483_2",
"text": "Restoring nasal lining is one of the essential parts during reconstruction of full-thickness defects of the nose. Without a sufficient nasal lining the whole reconstruction will fail. Nasal lining has to sufficiently cover the shaping subsurface framework. But in addition, lining must not compromise or even block nasal ventilation. This article demonstrates different possibilities of lining reconstruction. The use of composite grafts for small rim defects is described. The limits and technical components for application of skin grafts are discussed. Then the advantages and limitations of endonasal, perinasal, and hingeover flaps are demonstrated. Strategies to restore lining with one or two forehead flaps are presented. Finally, the possibilities and technical aspects to reconstruct nasal lining with a forearm flap are demonstrated. Technical details are explained by intraoperative pictures. Clinical cases are shown to illustrate the different approaches and should help to understand the process of decision making. It is concluded that although the lining cannot be seen after reconstruction of the cover it remains one of the key components for nasal reconstruction. When dealing with full-thickness nasal defects, there is no way to avoid learning how to restore nasal lining.",
"title": ""
},
{
"docid": "neg:1840483_3",
"text": "We present a simple yet effective unsupervised domain adaptation method that can be generally applied for different NLP tasks. Our method uses unlabeled target domain instances to induce a set of instance similarity features. These features are then combined with the original features to represent labeled source domain instances. Using three NLP tasks, we show that our method consistently outperforms a few baselines, including SCL, an existing general unsupervised domain adaptation method widely used in NLP. More importantly, our method is very easy to implement and incurs much less computational cost than SCL.",
"title": ""
},
{
"docid": "neg:1840483_4",
"text": "Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just reexecuting the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization",
"title": ""
},
{
"docid": "neg:1840483_5",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "neg:1840483_6",
"text": "The focus of this paper is to investigate how writing computer programs can help children develop their storytelling and creative writing abilities. The process of writing a program---coding---has long been considered only in terms of computer science, but such coding is also reflective of the imaginative and narrative elements of fiction writing workshops. Writing to program can also serve as programming to write, in which a child learns the importance of sequence, structure, and clarity of expression---three aspects characteristic of effective coding and good storytelling alike. While there have been efforts examining how learning to write code can be facilitated by storytelling, there has been little exploration as to how such creative coding can also be directed to teach students about the narrative and storytelling process. Using the introductory programming language Scratch, this paper explores the potential of having children create their own digital stories with the software and how the narrative structure of these stories offers kids the opportunity to better understand the process of expanding an idea into the arc of a story.",
"title": ""
},
{
"docid": "neg:1840483_7",
"text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most of EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.",
"title": ""
},
{
"docid": "neg:1840483_8",
"text": "With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. Logic obfuscation techniques including logic locking and IC camouflaging have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT attack resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. We present three attacks, namely “signal probability skew” (SPS) attack, “AppSAT guided removal (AGR) attack, and “sensitization guided SAT” (SGS) attack”, that can break Anti-SAT and ATI, within minutes.",
"title": ""
},
{
"docid": "neg:1840483_9",
"text": "The discriminative power of modern deep learning models for 3D human action recognition is growing ever so potent. In conjunction with the recent resurgence of 3D human action representation with 3D skeletons, the quality and the pace of recent progress have been significant. However, the inner workings of state-of-the-art learning based methods in 3D human action recognition still remain mostly black-box. In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition. TCN provides us a way to explicitly learn readily interpretable spatio-temporal representations for 3D human action recognition. Through this work, we wish to take a step towards a spatio-temporal model that is easier to understand, explain and interpret. The resulting model, Res-TCN, achieves state-of-the-art results on the largest 3D human action recognition dataset, NTU-RGBD.",
"title": ""
},
{
"docid": "neg:1840483_10",
"text": "The energy domain currently struggles with radical legal and technological changes, such as, smart meters. This results in new use cases which can be implemented based on business process technology. Understanding and automating business processes requires to model and test them. However, existing process testing approaches frequently struggle with the testing of process resources, such as ERP systems, and negative testing. Hence, this work presents a toolchain which tackles that limitations. The approach uses an open source process engine to generate event logs and applies process mining techniques in a novel way.",
"title": ""
},
{
"docid": "neg:1840483_11",
"text": "In this study, a photosynthesis-fermentation model was proposed to merge the positive aspects of autotrophs and heterotrophs. Microalga Chlorella protothecoides was grown autotrophically for CO(2) fixation and then metabolized heterotrophically for oil accumulation. Compared to typical heterotrophic metabolism, 69% higher lipid yield on glucose was achieved at the fermentation stage in the photosynthesis-fermentation model. An elementary flux mode study suggested that the enzyme Rubisco-catalyzed CO(2) re-fixation, enhancing carbon efficiency from sugar to oil. This result may explain the higher lipid yield. In this new model, 61.5% less CO(2) was released compared with typical heterotrophic metabolism. Immunoblotting and activity assay further showed that Rubisco functioned in sugar-bleaching cells at the fermentation stage. Overall, the photosynthesis-fermentation model with double CO(2) fixation in both photosynthesis and fermentation stages, enhances carbon conversion ratio of sugar to oil and thus provides an efficient approach for the production of algal lipid.",
"title": ""
},
{
"docid": "neg:1840483_12",
"text": "Graph clustering and graph outlier detection have been studied extensively on plain graphs, with various applications. Recently, algorithms have been extended to graphs with attributes as often observed in the real-world. However, all of these techniques fail to incorporate the user preference into graph mining, and thus, lack the ability to steer algorithms to more interesting parts of the attributed graph. In this work, we overcome this limitation and introduce a novel user-oriented approach for mining attributed graphs. The key aspect of our approach is to infer user preference by the so-called focus attributes through a set of user-provided exemplar nodes. In this new problem setting, clusters and outliers are then simultaneously mined according to this user preference. Specifically, our FocusCO algorithm identifies the focus, extracts focused clusters and detects outliers. Moreover, FocusCO scales well with graph size, since we perform a local clustering of interest to the user rather than global partitioning of the entire graph. We show the effectiveness and scalability of our method on synthetic and real-world graphs, as compared to both existing graph clustering and outlier detection approaches.",
"title": ""
},
{
"docid": "neg:1840483_13",
"text": "A new control strategy for obtaining the maximum traction force of electric vehicles with individual rear-wheel drive is presented. A sliding-mode observer is proposed to estimate the wheel slip and vehicle velocity under unknown road conditions by measuring only the wheel speeds. The proposed observer is based on the LuGre dynamic friction model and allows the maximum transmissible torque for each driven wheel to be obtained instantaneously. The maximum torque can be determined at any operating point and road condition, thus avoiding wheel skid. The proposed strategy maximizes the traction force while avoiding tire skid by controlling the torque of each traction motor. Simulation results using a complete vehicle model under different road conditions are presented to validate the proposed strategy.",
"title": ""
},
{
"docid": "neg:1840483_14",
"text": "Spectral Matching (SM) is a computationally efficient approach to approximate the solution of pairwise matching problems that are np-hard. In this paper, we present a probabilistic interpretation of spectral matching schemes and derive a novel Probabilistic Matching (PM) scheme that is shown to outperform previous approaches. We show that spectral matching can be interpreted as a Maximum Likelihood (ML) estimate of the assignment probabilities and that the Graduated Assignment (GA) algorithm can be cast as a Maximum a Posteriori (MAP) estimator. Based on this analysis, we derive a ranking scheme for spectral matchings based on their reliability, and propose a novel iterative probabilistic matching algorithm that relaxes some of the implicit assumptions used in prior works. We experimentally show our approaches to outperform previous schemes when applied to exhaustive synthetic tests as well as the analysis of real image sequences.",
"title": ""
},
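The spectral matching scheme that this passage reinterprets probabilistically can be sketched compactly: build a pairwise affinity matrix over candidate assignments, take its principal eigenvector as soft confidences, and greedily discretize under one-to-one constraints. The NumPy sketch below is a minimal illustration of that classic pipeline, assuming a toy affinity matrix and candidate list invented for the example; the paper's probabilistic re-ranking is not reproduced here.

```python
import numpy as np

def spectral_matching(M, candidates):
    """Greedy discretization of the principal eigenvector of the affinity
    matrix M (one row/column per candidate assignment (i, a))."""
    vals, vecs = np.linalg.eigh(M)              # M is symmetric, non-negative
    x = np.abs(vecs[:, np.argmax(vals)])        # confidence per candidate assignment

    matches, used_i, used_a = [], set(), set()
    for k in np.argsort(-x):                    # highest confidence first
        i, a = candidates[k]
        if x[k] <= 0 or i in used_i or a in used_a:
            continue                            # enforce one-to-one constraints
        matches.append((i, a))
        used_i.add(i)
        used_a.add(a)
    return matches

# Toy example: 2 model points, 2 scene points, 4 candidate assignments.
cands = [(0, 0), (0, 1), (1, 0), (1, 1)]
M = np.array([[1.0, 0.0, 0.0, 0.9],
              [0.0, 1.0, 0.1, 0.0],
              [0.0, 0.1, 1.0, 0.0],
              [0.9, 0.0, 0.0, 1.0]])
print(spectral_matching(M, cands))              # expect [(0, 0), (1, 1)]
```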
{
"docid": "neg:1840483_15",
"text": "Generalized frequency division multiplexing (GFDM) is a promising candidate waveform for next generation wireless communications systems. Unlike conventional orthogonal frequency division multiplexing (OFDM) based systems, it is a non-orthogonal waveform subject to inter-carrier and intersymbol interference. In multiple-input multiple-output (MIMO) systems, the additional inter-antenna interference also takes place. The presence of such three-dimensional interference challenges the receiver design. This paper addresses the MIMOGFDM channel estimation problem with the aid of known reference signals also referred as pilots. Specifically, the received signal is expressed as the joint effect of the pilot part, unknown data part and noise part. On top of this formulation, least squares (LS) and linear minimum mean square error (LMMSE) estimators are presented, while their performance is evaluated for various pilot arrangements.",
"title": ""
},
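For the LS and LMMSE estimators mentioned above, a generic pilot-based sketch conveys the idea; the MIMO-GFDM specifics (the interference terms of the non-orthogonal waveform, the pilot arrangements) are omitted, and the simple linear model y = X h + n with a known pilot matrix X is an assumption made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pilots, n_taps, noise_var = 16, 4, 0.01

# Known pilot matrix X, unknown channel h, observed y = X h + n.
X = (rng.standard_normal((n_pilots, n_taps)) +
     1j * rng.standard_normal((n_pilots, n_taps))) / np.sqrt(2)
h = (rng.standard_normal(n_taps) + 1j * rng.standard_normal(n_taps)) / np.sqrt(2)
n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_pilots) +
                              1j * rng.standard_normal(n_pilots))
y = X @ h + n

h_ls = np.linalg.pinv(X) @ y                    # least-squares estimate
R_h = np.eye(n_taps)                            # assumed prior channel covariance
h_lmmse = R_h @ X.conj().T @ np.linalg.solve(   # LMMSE estimate
    X @ R_h @ X.conj().T + noise_var * np.eye(n_pilots), y)

print(np.linalg.norm(h - h_ls), np.linalg.norm(h - h_lmmse))
```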
{
"docid": "neg:1840483_16",
"text": "An integrative computational methodology is developed for the management of nonpoint source pollution from watersheds. The associated decision support system is based on an interface between evolutionary algorithms (EAs) and a comprehensive watershed simulation model, and is capable of identifying optimal or near-optimal land use patterns to satisfy objectives. Specifically, a genetic algorithm (GA) is linked with the U.S. Department of Agriculture’s Soil and Water Assessment Tool (SWAT) for single objective evaluations, and a Strength Pareto Evolutionary Algorithm has been integrated with SWAT for multiobjective optimization. The model can be operated at a small spatial scale, such as a farm field, or on a larger watershed scale. A secondary model that also uses a GA is developed for calibration of the simulation model. Sensitivity analysis and parameterization are carried out in a preliminary step to identify model parameters that need to be calibrated. Application to a demonstration watershed located in Southern Illinois reveals the capability of the model in achieving its intended goals. However, the model is found to be computationally demanding as a direct consequence of repeated SWAT simulations during the search for favorable solutions. An artificial neural network (ANN) has been developed to mimic SWAT outputs and ultimately replace it during the search process. Replacement of SWAT by the ANN results in an 84% reduction in computational time required to identify final land use patterns. The ANN model is trained using a hybrid of evolutionary programming (EP) and the back propagation (BP) algorithms. The hybrid algorithm was found to be more effective and efficient than either EP or BP alone. Overall, this study demonstrates the powerful and multifaceted role that EAs and artificial intelligence techniques could play in solving the complex and realistic problems of environmental and water resources systems. CE Database subject headings: Algorithms; Neural networks; Watershed management; Pollution control; Calibration; Computation.",
"title": ""
},
{
"docid": "neg:1840483_17",
"text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.",
"title": ""
},
{
"docid": "neg:1840483_18",
"text": "Software requirements specifications (SRS) are usually validated by inspections, in which several reviewers read all or part of the specification and search for defects. We hypothesize that diflerent methods for conducting these searches may have significantly diflerent rat es of success. Using a controlled experiment, we show that a Scenario-based detection method, in which each reviewer executes a specific procedure to discover a particular class of defects has a higher defect detection rate than either Ad Hoc or Checklist methods. We describe the design, execution, and analysis of the expem”ment so others may reproduce it and test our results for diflerent kinds of software developments and different populations of software engineers.",
"title": ""
},
{
"docid": "neg:1840483_19",
"text": "This review of research explores characteristics associated with massive open online courses (MOOCs). Three key characteristics are revealed: varied definitions of openness, barriers to persistence, and a distinct structure that takes the form as one of two pedagogical approaches. The concept of openness shifts among different MOOCs, models, researchers, and facilitators. The high dropout rates show that the barriers to learning are a significant challenge. Research has focused on engagement, motivation, and presence to mitigate risks of learner isolation. The pedagogical structure of the connectivist MOOC model (cMOOC) incorporates a social, distributed, networked approach and significant learner autonomy that is geared towards adult lifelong learners interested in personal or professional development. This connectivist approach relates to situated and social learning theories such as social constructivism (Kop, 2011). By contrast, the design of the Stanford Artificial Intelligence (AI) model (xMOOC) uses conventional directed instruction in the context of formal postsecondary educational institutions. This traditional pedagogical approach is categorized as cognitive-behaviorist (Rodriguez, 2012). These two distinct MOOC models attract different audiences, use different learning approaches, and employ different teaching methods. The purpose of this review is to synthesize the research describing the phenomenon of MOOCs in informal and postsecondary online learning. Massive open online courses (MOOCs) are a relatively new phenomenon sweeping higher education. By definition, MOOCs take place online. They could be affiliated with a university, but not necessarily. They are larger than typical college classes, sometimes much larger. They are open, which has multiple meanings evident in this research. While the literature is growing on this topic, it is yet limited. Scholars are taking notice of the literature around MOOCs in all its forms from conceptual to technical. Conference proceedings and magazine articles make up the majority of literature on MOOCs (Liyanagunawardena, Adams, & Williams, 2013). In order to better understand the characteristics associated with MOOCs, this review of literature focuses solely on original research published in scholarly journals. This emphasis on peer-reviewed research is an essential first step to form a more critical and comprehensive perspective by tempering the media hype. While most of the early scholarly research examines aspects of the cMOOC model, much of the hype and controversy surrounds the scaling innovation of the xMOOC model in postsecondary learning contexts. Naidu (2013) calls out the massive open online repetitions of failed pedagogy (MOORFAPs) and forecasts a transformation to massive open online learning opportunities (MOOLOs). Informed educators will be better equipped to make evidence-based decisions, foster the positive growth of this innovation, and adapt it for their own unique contexts. This research synthesis is framed by a withinand Journal of Interactive Online Learning Kennedy 2 between-study literature analysis (Onwuegbuzie, Leech, & Collins, 2012) and situated within the context of online teaching and learning.",
"title": ""
}
] |
1840484 | Computational modeling of synthetic microbial biofilms. | [
{
"docid": "pos:1840484_0",
"text": "High-performance computing has recently seen a surge of interest in heterogeneous systems, with an emphasis on modern Graphics Processing Units (GPUs). These devices offer tremendous potential for performance and efficiency in important large-scale applications of computational science. However, exploiting this potential can be challenging, as one must adapt to the specialized and rapidly evolving computing environment currently exhibited by GPUs. One way of addressing this challenge is to embrace better techniques and develop tools tailored to their needs. This article presents one simple technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL, two open-source toolkits that support this technique. In introducing PyCUDA and PyOpenCL, this article proposes the combination of a dynamic, high-level scripting language with the massive performance of a GPU as a compelling two-tiered computing platform, potentially offering significant performance and productivity advantages over conventional single-tier, static systems. The concept of RTCG is simple and easily implemented using existing, robust infrastructure. Nonetheless it is powerful enough to support (and encourage) the creation of custom application-specific tools by its users. The premise of the paper is illustrated by a wide range of examples where the technique has been applied with considerable success.",
"title": ""
}
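The run-time code generation (RTCG) idea from the passage can be illustrated with a tiny PyCUDA sketch in which the kernel source is assembled as a Python string, with a constant baked in, immediately before compilation. This is an illustrative example rather than code from the paper, and it assumes a machine with an NVIDIA GPU, a working CUDA toolkit and the pycuda package installed.

```python
import numpy as np
import pycuda.autoinit                      # creates a context on the default GPU
import pycuda.driver as drv
from pycuda.compiler import SourceModule

def make_scale_kernel(factor):
    """Run-time code generation: the constant is written into the CUDA source
    before compilation instead of being passed as a kernel argument."""
    source = """
    __global__ void scale(float *a)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        a[i] *= %(factor)ff;
    }
    """ % {"factor": factor}
    return SourceModule(source).get_function("scale")

a = np.random.randn(256).astype(np.float32)
expected = 3.0 * a
scale = make_scale_kernel(3.0)
scale(drv.InOut(a), block=(256, 1, 1), grid=(1, 1))
assert np.allclose(a, expected)
```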
] | [
{
"docid": "neg:1840484_0",
"text": "The automatic system for voice pathology assessment is one of the active areas for researchers in the recent years due to its benefits to the clinicians and presence of a significant number of dysphonic patients around the globe. In this paper, a voice disorder detection system is developed to differentiate between a normal and pathological voice signal. The system is implemented by applying the local binary pattern (LBP) operator on Mel-weighted spectrum of a signal. The LBP is considered as one of the sophisticated techniques for the image processing. The technique also provided very good results for voice pathology detection during this study. The English voice disorder database MEEI is used to evaluate the performance of the developed system. The results of the LBP operator based system are compared with MFCC and found to be better than MFCC. Key-Words: LBP operator, MFCC, Vocal fold disorders, Sustained vowel, MEEI database, disorder detection system.",
"title": ""
},
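A rough sketch of the kind of feature extraction described above (LBP codes computed on a Mel-weighted spectrum, then histogrammed for a classifier) might look as follows; the paper's exact pre-processing, LBP variant and classifier are not given here, so the parameter choices, the placeholder file name and the librosa/scikit-image pipeline are assumptions.

```python
import numpy as np
import librosa
from skimage.feature import local_binary_pattern

def lbp_mel_features(wav_path, n_mels=64, P=8, R=1):
    """Uniform-LBP histogram of a Mel-weighted spectrogram (illustrative)."""
    y, sr = librosa.load(wav_path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)        # treat spectrum as an image
    codes = local_binary_pattern(mel_db, P, R, method="uniform")
    # Fixed-length feature vector that a classifier (e.g. an SVM) would consume.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Example (placeholder path): features = lbp_mel_features("sustained_vowel.wav")
```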
{
"docid": "neg:1840484_1",
"text": "We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR100. Swapout samples from a rich set of architectures including dropout [20], stochastic depth [7] and residual architectures [5, 6] as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.",
"title": ""
},
{
"docid": "neg:1840484_2",
"text": "Mobile learning or “m-learning” is the process of learning when learners are not at a fixed location or time and can exploit the advantage of learning opportunities using mobile technologies. Nowadays, speech recognition is being used in many mobile applications.!Speech recognition helps people to interact with the device as if were they talking to another person. This technology helps people to learn anything using computers by promoting self-study over extended periods of time. The objective of this study focuses on designing and developing a mobile application for the Arabic recognition of spoken Quranic verses. The application is suitable for Android-based devices. The application is called Say Quran and is available on Google Play Store. Moreover, this paper presents the results of a preliminary study to gather feedback from students regarding the developed application.",
"title": ""
},
{
"docid": "neg:1840484_3",
"text": "Feature learning for 3D shapes is challenging due to the lack of natural paramterization for 3D surface models. We adopt the multi-view depth image representation and propose Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and quality projective feature learning for 3D shapes. In contrast to existing multiview learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights and in each layer, their unprojections together form a valid 3D reconstruction of the input 3D shape through using normalized convolution kernels. These lead to a more accurate 3D feature learning as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.",
"title": ""
},
{
"docid": "neg:1840484_4",
"text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.",
"title": ""
},
{
"docid": "neg:1840484_5",
"text": "Due to their capacity-achieving property, polar codes have become one of the most attractive channel codes. To date, the successive-cancellation list (SCL) decoding algorithm is the primary approach that can guarantee outstanding error-correcting performance of polar codes. However, the hardware designs of the original SCL decoder have a large silicon area and a long decoding latency. Although some recent efforts can reduce either the area or latency of SCL decoders, these two metrics still cannot be optimized at the same time. This brief, for the first time, proposes a general log-likelihood-ratio (LLR) based SCL decoding algorithm with multibit decision. This new algorithm, referred to as LLR - 2K b-SCL, can determine 2K bits simultaneously for arbitrary K with the use of LLR messages. In addition, a reduced-data-width scheme is presented to reduce the critical path of the sorting block. Then, based on the proposed algorithm, a VLSI architecture of the new SCL decoder is developed. Synthesis results show that, for an example (1024, 512) polar code with list size 4, the proposed LLR - 2K b - SCL decoders achieve a significant reduction in both area and latency as compared to prior works. As a result, the hardware efficiencies of the proposed designs with K = 2 and 3 are 2.33 times and 3.32 times of that of the state-of-the-art works, respectively.",
"title": ""
},
{
"docid": "neg:1840484_6",
"text": "Many stream-based applications have sophisticated data processing requirements and real-time performance expectations that need to be met under high-volume, time-varying data streams. In order to address these challenges, we propose novel operator scheduling approaches that specify (1) which operators to schedule (2) in which order to schedule the operators, and (3) how many tuples to process at each execution step. We study our approaches in the context of the Aurora data stream manager. We argue that a fine-grained scheduling approach in combination with various scheduling techniques (such as batching of operators and tuples) can significantly improve system efficiency by reducing various system overheads. We also discuss application-aware extensions that make scheduling decisions according to per-application Quality of Service (QoS) specifications. Finally, we present prototype-based experimental results that characterize the efficiency and effectiveness of our approaches under various stream workloads and processing scenarios.",
"title": ""
},
{
"docid": "neg:1840484_7",
"text": "This paper is a work-in-progress account of ideas and propositions about resilience in socialecological systems. It articulates our understanding of how these complex systems change and what determines their ability to absorb disturbances in either their ecological or their social domains. We call them “propositions” because, although they are useful in helping us understand and compare different social-ecological systems, they are not sufficiently well defined to be considered formal hypotheses. These propositions were developed in two workshops, in 2003 and 2004, in which participants compared the dynamics of 15 case studies in a wide range of regions around the world. The propositions raise many questions, and we present a list of some that could help define the next phase of resilience-related research.",
"title": ""
},
{
"docid": "neg:1840484_8",
"text": "Information extraction and human collaboration techniques are widely applied in the construction of web-scale knowledge bases. However, these knowledge bases are often incomplete or uncertain. In this paper, we present ProbKB, a probabilistic knowledge base designed to infer missing facts in a scalable, probabilistic, and principled manner using a relational DBMS. The novel contributions we make to achieve scalability and high quality are: 1) We present a formal definition and a novel relational model for probabilistic knowledge bases. This model allows an efficient SQL-based inference algorithm for knowledge expansion that applies inference rules in batches; 2) We implement ProbKB on massive parallel processing databases to achieve further scalability; and 3) We combine several quality control methods that identify erroneous rules, facts, and ambiguous entities to improve the precision of inferred facts. Our experiments show that ProbKB system outperforms the state-of-the-art inference engine in terms of both performance and quality.",
"title": ""
},
{
"docid": "neg:1840484_9",
"text": "The success of text-based retrieval motivates us to investigate analogous techniques which can support the querying and browsing of image data. However, images differ significantly from text both syntactically and semantically in their mode of representing and expressing information. Thus, the generalization of information retrieval from the text domain to the image domain is non-trivial. This paper presents a framework for information retrieval in the image domain which supports content-based querying and browsing of images. A critical first step to establishing such a framework is to construct a codebook of \"keywords\" for images which is analogous to the dictionary for text documents. We refer to such \"keywords\" in the image domain as \"keyblocks.\" In this paper, we first present various approaches to generating a codebook containing keyblocks at different resolutions. Then we present a keyblock-based approach to content-based image retrieval. In this approach, each image is encoded as a set of one-dimensional index codes linked to the keyblocks in the codebook, analogous to considering a text document as a linear list of keywords. Generalizing upon text-based information retrieval methods, we then offer various techniques for image-based information retrieval. By comparing the performance of this approach with conventional techniques using color and texture features, we demonstrate the effectiveness of the keyblock-based approach to content-based image retrieval.",
"title": ""
},
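The codebook idea in this passage parallels vector quantization: cut images into blocks, cluster the blocks into "keyblocks", and represent each image by its histogram of keyblock indices, the analogue of a keyword histogram for text. The sketch below uses k-means as a stand-in codebook generator and random arrays as stand-in images; the block size, codebook size and similarity measure are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def blocks(img, b=4):
    """Cut a grayscale image into non-overlapping b-by-b blocks (flattened)."""
    h, w = img.shape
    return np.array([img[i:i + b, j:j + b].ravel()
                     for i in range(0, h - b + 1, b)
                     for j in range(0, w - b + 1, b)])

def keyblock_histogram(img, codebook):
    """Encode an image as a normalized histogram of keyblock indices."""
    codes = codebook.predict(blocks(img))
    hist = np.bincount(codes, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
images = [rng.random((32, 32)) for _ in range(10)]            # stand-in images
codebook = KMeans(n_clusters=16, n_init=4, random_state=0)
codebook.fit(np.vstack([blocks(im) for im in images]))        # learn the keyblocks

query_hist = keyblock_histogram(images[0], codebook)
ranked = sorted(range(len(images)),
                key=lambda k: -np.dot(query_hist, keyblock_histogram(images[k], codebook)))
print("best matches for image 0:", ranked[:3])
```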
{
"docid": "neg:1840484_10",
"text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other publicand private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.",
"title": ""
},
{
"docid": "neg:1840484_11",
"text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques. In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.",
"title": ""
},
{
"docid": "neg:1840484_12",
"text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"title": ""
},
{
"docid": "neg:1840484_13",
"text": "We present a solution to “Google Cloud and YouTube8M Video Understanding Challenge” that ranked 5th place. The proposed model is an ensemble of three model families, two frame level and one video level. The training was performed on augmented dataset, with cross validation.",
"title": ""
},
{
"docid": "neg:1840484_14",
"text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840484_15",
"text": "As a promising paradigm to reduce both capital and operating expenditures, the cloud radio access network (C-RAN) has been shown to provide high spectral efficiency and energy efficiency. Motivated by its significant theoretical performance gains and potential advantages, C-RANs have been advocated by both the industry and research community. This paper comprehensively surveys the recent advances of C-RANs, including system architectures, key techniques, and open issues. The system architectures with different functional splits and the corresponding characteristics are comprehensively summarized and discussed. The state-of-the-art key techniques in C-RANs are classified as: the fronthaul compression, large-scale collaborative processing, and channel estimation in the physical layer; and the radio resource allocation and optimization in the upper layer. Additionally, given the extensiveness of the research area, open issues, and challenges are presented to spur future investigations, in which the involvement of edge cache, big data mining, socialaware device-to-device, cognitive radio, software defined network, and physical layer security for C-RANs are discussed, and the progress of testbed development and trial test is introduced as well.",
"title": ""
},
{
"docid": "neg:1840484_16",
"text": "Recommendation System Using Collaborative Filtering by Yunkyoung Lee Collaborative filtering is one of the well known and most extensive techniques in recommendation system its basic idea is to predict which items a user would be interested in based on their preferences. Recommendation systems using collaborative filtering are able to provide an accurate prediction when enough data is provided, because this technique is based on the user’s preference. User-based collaborative filtering has been very successful in the past to predict the customer’s behavior as the most important part of the recommendation system. However, their widespread use has revealed some real challenges, such as data sparsity and data scalability, with gradually increasing the number of users and items. To improve the execution time and accuracy of the prediction problem, this paper proposed item-based collaborative filtering applying dimension reduction in a recommendation system. It demonstrates that the proposed approach can achieve better performance and execution time for the recommendation system in terms of existing challenges, according to evaluation metrics using Mean Absolute Error (MAE).",
"title": ""
},
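A minimal sketch of the combination described above, item-based collaborative filtering with dimension reduction, is given below; the latent dimensionality, similarity measure and the tiny rating matrix are illustrative assumptions rather than the thesis' actual experimental setup.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

R = np.array([[5, 3, 0, 1],       # user x item ratings, 0 = unrated
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

# Reduce each item to a low-dimensional latent vector before computing
# item-item similarities (this is where the scalability gain comes from).
svd = TruncatedSVD(n_components=2, random_state=0)
item_factors = svd.fit_transform(R.T)              # items x latent factors
sim = cosine_similarity(item_factors)               # item x item similarity

def predict(user, item, k=2):
    """Weighted average of the user's ratings on the k most similar rated items."""
    rated = np.where(R[user] > 0)[0]
    neighbors = rated[np.argsort(-sim[item, rated])][:k]
    weights = sim[item, neighbors]
    return float(np.dot(weights, R[user, neighbors]) / (np.abs(weights).sum() + 1e-9))

print(round(predict(user=0, item=2), 2))
```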
{
"docid": "neg:1840484_17",
"text": "Any books that you read, no matter how you got the sentences that have been read from the books, surely they will give you goodness. But, we will show you one of recommendation of the book that you need to read. This web usability a user centered design approach is what we surely mean. We will show you the reasonable reasons why you need to read this book. This book is a kind of precious book written by an experienced author.",
"title": ""
},
{
"docid": "neg:1840484_18",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
},
{
"docid": "neg:1840484_19",
"text": "Complex event processing has become increasingly important in modern applications, ranging from supply chain management for RFID tracking to real-time intrusion detection. The goal is to extract patterns from such event streams in order to make informed decisions in real-time. However, networking latencies and even machine failure may cause events to arrive out-of-order at the event stream processing engine. In this work, we address the problem of processing event pattern queries specified over event streams that may contain out-of-order data. First, we analyze the problems state-of-the-art event stream processing technology would experience when faced with out-of-order data arrival. We then propose a new solution of physical implementation strategies for the core stream algebra operators such as sequence scan and pattern construction, including stack- based data structures and associated purge algorithms. Optimizations for sequence scan and construction as well as state purging to minimize CPU cost and memory consumption are also introduced. Lastly, we conduct an experimental study demonstrating the effectiveness of our approach.",
"title": ""
}
] |
1840485 | Understanding Graph Sampling Algorithms for Social Network Analysis | [
{
"docid": "pos:1840485_0",
"text": "With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.",
"title": ""
},
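The Metropolis-Hastings random walk (MHRW) that this passage identifies as a well-performing sampler admits a very short implementation: propose a random neighbor and accept it with probability min(1, deg(current)/deg(proposed)), which removes the degree bias of a plain random walk so the stationary distribution over nodes is uniform. The sketch below runs it on NetworkX's karate-club graph as a small stand-in for a real social graph; walk length and start node are arbitrary choices for the example.

```python
import random
import networkx as nx

def mhrw_sample(G, start, n_steps, seed=0):
    """Metropolis-Hastings random walk over an undirected graph."""
    rng = random.Random(seed)
    v = start
    sample = []
    for _ in range(n_steps):
        w = rng.choice(list(G.neighbors(v)))            # propose a neighbor
        # Accept with prob min(1, deg(v)/deg(w)) to correct the degree bias.
        if rng.random() <= G.degree(v) / G.degree(w):
            v = w
        sample.append(v)                                # keep repeats (it is a walk)
    return sample

G = nx.karate_club_graph()
walk = mhrw_sample(G, start=0, n_steps=5000)
print(len(set(walk)), "distinct nodes visited out of", G.number_of_nodes())
```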
{
"docid": "pos:1840485_1",
"text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"title": ""
}
] | [
{
"docid": "neg:1840485_0",
"text": "The bee genus Lasioglossum Curtis is a model taxon for studying the evolutionary origins of and reversals in eusociality. This paper presents a phylogenetic analysis of Lasioglossum species and subgenera based on a data set consisting of 1240 bp of the mitochondrial cytochrome oxidase I (COI) gene for seventy-seven taxa (sixty-six ingroup and eleven outgroup taxa). Maximum parsimony was used to analyse the data set (using PAUP*4.0) by a variety of weighting methods, including equal weights, a priori weighting and a posteriori weighting. All methods yielded roughly congruent results. Michener's Hemihalictus series was found to be monophyletic in all analyses but one, while his Lasioglossum series formed a basal, paraphyletic assemblage in all analyses but one. Chilalictus was consistently found to be a basal taxon of Lasioglossum sensu lato and Lasioglossum sensu stricto was found to be monophyletic. Within the Hemihalictus series, major lineages included Dialictus + Paralictus, the acarinate Evylaeus + Hemihalictus + Sudila and the carinate Evylaeus + Sphecodogastra. Relationships within the Hemihalictus series were highly stable to altered weighting schemes, while relationships among the basal subgenera in the Lasioglossum series (Lasioglossum s.s., Chilalictus, Parasphecodes and Ctenonomia) were unclear. The social parasite of Dialictus, Paralictus, is consistently and unambiguously placed well within Dialictus, thus rendering Dialictus paraphyletic. The implications of this for understanding the origins of social parasitism are discussed.",
"title": ""
},
{
"docid": "neg:1840485_1",
"text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.",
"title": ""
},
{
"docid": "neg:1840485_2",
"text": "We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [10] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [27, 28] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [10, 25] when they are all used without postprocessing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.",
"title": ""
},
{
"docid": "neg:1840485_3",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "neg:1840485_4",
"text": "The purposes of the study are to explore the effects among brand awareness, perceived quality, brand loyalty and customer purchase intention and mediating effects of perceived quality and brand loyalty on brand awareness and purchase intention. The samples are collected from cellular phone users living in Chiyi, and the research adopts regression analysis and mediating test to examine the hypotheses. The results are: (a) the relations among the brand awareness, perceived quality and brand loyalty for purchase intention are significant and positive effect, (b) perceived quality has a positive effect on brand loyalty, (c) perceived quality will meditate the effects between brand awareness and purchase intention, and (d) brand loyalty will mediate the effects between brand awareness and purchase intention. The study suggests that cellular phone manufacturers ought to build a brand and promote its brand awareness through sales promotion, advertising, and other marketing activities. When brand awareness is high, its brand loyalty will also increase. Consumers will evaluate perceived quality of a product from their purchase experience. As a result, brand loyalty and brand preference will increase and also purchase intention.",
"title": ""
},
{
"docid": "neg:1840485_5",
"text": "Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, the classification method and fine counting based on Support Vector Machines (SVM) using global features are designed. Finally, the insect counting and recognition system is implemented on Raspberry PI. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with the conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and average classifying accuracy is 90.18% on Raspberry PI. The proposed system is easy-to-use and provides efficient and accurate recognition data, therefore, it can be used for intelligent agriculture applications.",
"title": ""
},
{
"docid": "neg:1840485_6",
"text": "The vision of Future Internet based on standard communication protocols considers the merging of computer networks, Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Internet of Media (IoM), and Internet of Services (IoS), into a common global IT platform of seamless networks and networked “smart things/objects”. However, with the widespread deployment of networked, intelligent sensor technologies, an Internet of Things (IoT) is steadily evolving, much like the Internet decades ago. In the future, hundreds of billions of smart sensors and devices will interact with one another without human intervention, on a Machine-to-Machine (M2M) basis. They will generate an enormous amount of data at an unprecedented scale and resolution, providing humans with information and control of events and objects even in remote physical environments. This paper will provide an overview of performance evaluation, challenges and opportunities of IOT results for machine learning presented by this new paradigm.",
"title": ""
},
{
"docid": "neg:1840485_7",
"text": "For decades, studies of endocrine-disrupting chemicals (EDCs) have challenged traditional concepts in toxicology, in particular the dogma of \"the dose makes the poison,\" because EDCs can have effects at low doses that are not predicted by effects at higher doses. Here, we review two major concepts in EDC studies: low dose and nonmonotonicity. Low-dose effects were defined by the National Toxicology Program as those that occur in the range of human exposures or effects observed at doses below those used for traditional toxicological studies. We review the mechanistic data for low-dose effects and use a weight-of-evidence approach to analyze five examples from the EDC literature. Additionally, we explore nonmonotonic dose-response curves, defined as a nonlinear relationship between dose and effect where the slope of the curve changes sign somewhere within the range of doses examined. We provide a detailed discussion of the mechanisms responsible for generating these phenomena, plus hundreds of examples from the cell culture, animal, and epidemiology literature. We illustrate that nonmonotonic responses and low-dose effects are remarkably common in studies of natural hormones and EDCs. Whether low doses of EDCs influence certain human disorders is no longer conjecture, because epidemiological studies show that environmental exposures to EDCs are associated with human diseases and disabilities. We conclude that when nonmonotonic dose-response curves occur, the effects of low doses cannot be predicted by the effects observed at high doses. Thus, fundamental changes in chemical testing and safety determination are needed to protect human health.",
"title": ""
},
{
"docid": "neg:1840485_8",
"text": "In this paper, a changeable winding brushless DC (BLDC) motor for the expansion of the speed region is described. The changeable winding BLDC motor is driven by a large number of phase turns at low speeds and by a reduced number of turns at high speeds. For this reason, the section where the winding changes is very important. Ideally, the time at which the windings are to be converted should be same as the time at which the voltage changes. However, if this timing is not exactly synchronized, a large current is generated in the motor, and the demagnetization of the permanent magnet occurs. In addition, a large torque ripple is produced. In this paper, we describe the demagnetization of the permanent magnet in a fault situation when the windings change, and we suggest a design process to solve this problem.",
"title": ""
},
{
"docid": "neg:1840485_9",
"text": "Providing transactional primitives of NAND flash based solid state disks (SSDs) have demonstrated a great potential for high performance transaction processing and relieving software complexity. Similar with software solutions like write-ahead logging (WAL) and shadow paging, transactional SSD has two parts of overhead which include: 1) write overhead under normal condition, and 2) recovery overhead after power failures. Prior transactional SSD designs utilize out-of-band (OOB) area in flash pages to store transaction information to reduce the first part of overhead. However, they are required to scan a large part of or even whole SSD after power failures to abort unfinished transactions. Another limitation of prior approaches is the unicity of transactional primitive they provided. In this paper, we propose a new transactional SSD design named Möbius. Möbius provides different types of transactional primitives to support static and dynamic transactions separately. Möbius flash translation layer (mFTL), which combines normal FTL with transaction processing by storing mapping and transaction information together in a physical flash page as atom inode. By amortizing the cost of transaction processing with FTL persistence, MFTL achieve high performance in normal condition and does not increase write amplification ratio. After power failures, Möbius can leverage atom inode to eliminate unnecessary scanning and recover quickly. We implemented a prototype of Möbius and compare it with other state-of-art transactional SSD designs. Experimental results show that Möbius can at most 67% outperform in transaction throughput (TPS) and 29 times outperform in recovery time while still have similar or even better write amphfication ratio comparing with prior hardware approaches.",
"title": ""
},
{
"docid": "neg:1840485_10",
"text": "Gaussian process (GP) regression models make for powerful predictors in out of sample exercises, but cubic runtimes for dense matrix decompositions severely limit the size of data—training and testing—on which they can be deployed. That means that in computer experiment, spatial/geo-physical, and machine learning contexts, GPs no longer enjoy privileged status as data sets continue to balloon in size. We discuss an implementation of local approximate Gaussian process models, in the laGP package for R, that offers a particular sparse-matrix remedy uniquely positioned to leverage modern parallel computing architectures. The laGP approach can be seen as an update on the spatial statistical method of local kriging neighborhoods. We briefly review the method, and provide extensive illustrations of the features in the package through worked-code examples. The appendix covers custom building options for symmetric multi-processor and graphical processing units, and built-in wrapper routines that automate distribution over a simple network of workstations.",
"title": ""
},
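As a rough illustration of the local-approximation idea (the laGP package itself is written for R and uses a sequential design criterion rather than plain nearest neighbors), the following Python sketch fits a small Gaussian process only on the training points closest to each prediction location; the kernel, neighborhood size and synthetic data are assumptions made for the example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def local_gp_predict(X, y, x_star, n_local=30):
    """Fit a GP on the n_local nearest training points and predict at x_star."""
    d = np.linalg.norm(X - x_star, axis=1)
    idx = np.argsort(d)[:n_local]                     # local sub-design
    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4),
                                  normalize_y=True)
    gp.fit(X[idx], y[idx])
    mean, sd = gp.predict(x_star.reshape(1, -1), return_std=True)
    return float(mean[0]), float(sd[0])

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(2000, 2))                # dense design, cheap locally
y = np.sin(X[:, 0]) * np.cos(X[:, 1]) + 0.05 * rng.standard_normal(2000)
print(local_gp_predict(X, y, np.array([0.5, -0.5])))
```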
{
"docid": "neg:1840485_11",
"text": "We present a patient with partial monosomy of the short arm of chromosome 18 caused by de novo translocation t(Y;18) and a generalized form of keratosis pilaris (keratosis pilaris affecting the skin follicles of the trunk, limbs and face-ulerythema ophryogenes). Two-color FISH with centromere-specific Y and 18 DNA probes identified the derivative chromosome 18 as a dicentric with breakpoints in p11.2 on both involved chromosomes. The patient had another normal Y chromosome. This is a third report the presence of a chromosome 18p deletion (and first case of a translocation involving 18p and a sex chromosome) with this genodermatosis. Our data suggest that the short arm of chromosome 18 is a candidate region for a gene causing keratosis pilaris. Unmasking of a recessive mutation at the disease locus by deletion of the wild type allele could be the cause of the recessive genodermatosis.",
"title": ""
},
{
"docid": "neg:1840485_12",
"text": "Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily live. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users' emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.",
"title": ""
},
{
"docid": "neg:1840485_13",
"text": "Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure— e.g. grammar, semantics or syntax— from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform inferior when operating on textual data, which are different to the one they are trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. In this thesis, we present so-called unsupervisedmethods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for these phenomena is that in language many words occur only few times. If a word is seen only few times no precise information can be extracted from the text it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be",
"title": ""
},
{
"docid": "neg:1840485_14",
"text": "Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the models capacity a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.",
"title": ""
},
{
"docid": "neg:1840485_15",
"text": "To recognize application of Artificial Neural Networks (ANNs) in weather forecasting, especially in rainfall forecasting a comprehensive literature review from 1923 to 2012 is done and presented in this paper. And it is found that architectures of ANN such as BPN, RBFN is best established to be forecast chaotic behavior and have efficient enough to forecast monsoon rainfall as well as other weather parameter prediction phenomenon over the smaller geographical region.",
"title": ""
},
{
"docid": "neg:1840485_16",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "neg:1840485_17",
"text": "Especially in times of high raw material prices as a result of limited availability of feed ingredients nutritionist look for ways to keep feed cost as low as possible. Part of this discussion is whether certain ingredients can be replaced by others while avoiding impairments of performance. This discussion sometimes includes the question whether supplemental methionine can be replaced by betaine. Use of supplemental methionine, choline and betaine is common in broiler diets. Biochemically, all three compounds can act as methyl group donors. Figure 1 illustrates metabolic pathways connecting choline, betaine and methionine. This chart shows that choline is transformed to betaine which can then deliver a CH3-group for methylation reactions. One of those reactions is the methylation of homocysteine to methionine. This reaction occurs as part of the homocysteine cycle, which continues by transferring the methyl group further and yielding homocysteine again. Thus, there is no net yield of methionine from this cycle, since it only functions to transport a methyl group.",
"title": ""
},
{
"docid": "neg:1840485_18",
"text": "Cryptanalysis identifies weaknesses of ciphers and investigates methods to exploit them in order to compute the plaintext and/or the secret cipher key. Exploitation is nontrivial and, in many cases, weaknesses have been shown to be effective only on reduced versions of the ciphers. In this paper we apply artificial neural networks to automatically “assist” cryptanalysts into exploiting cipher weaknesses. The networks are trained by providing data in a form that points out the weakness together with the encryption key, until the network is able to generalize and predict the key (or evaluate its likelihood) for any possible ciphertext. We illustrate the effectiveness of the approach through simple classical ciphers, by providing the first ciphertext-only attack on substitution ciphers based on neural networks.",
"title": ""
},
{
"docid": "neg:1840485_19",
"text": "Control over the motional degrees of freedom of atoms, ions, and molecules in a field-free environment enables unrivalled measurement accuracies but has yet to be applied to highly charged ions (HCIs), which are of particular interest to future atomic clock designs and searches for physics beyond the Standard Model. Here, we report on the Coulomb crystallization of HCIs (specifically 40Ar13+) produced in an electron beam ion trap and retrapped in a cryogenic linear radiofrequency trap by means of sympathetic motional cooling through Coulomb interaction with a directly laser-cooled ensemble of Be+ ions. We also demonstrate cooling of a single Ar13+ ion by a single Be+ ion—the prerequisite for quantum logic spectroscopy with a potential 10−19 accuracy level. Achieving a seven-orders-of-magnitude decrease in HCI temperature starting at megakelvin down to the millikelvin range removes the major obstacle for HCI investigation with high-precision laser spectroscopy.",
"title": ""
}
] |
1840486 | Semantic smart grid services: Enabling a standards-compliant Internet of energy platform with IEC 61850 and OPC UA | [
{
"docid": "pos:1840486_0",
"text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree onthe inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.",
"title": ""
},
{
"docid": "pos:1840486_1",
"text": "Within this contribution, we outline the use of the new automation standards family OPC Unified Architecture (IEC 62541) in scope with the IEC 61850 field automation standard. The IEC 61850 provides both an abstract data model and an abstract communication interface. Different technology mappings to implement the model exist. With the upcoming OPC UA, a new communication model to implement abstract interfaces has been introduced. We outline its use in this contribution and also give examples on how it can be used alongside the IEC 61970 Common Information Model to properly integrate ICT and field automation at communication standards level.",
"title": ""
}
] | [
{
"docid": "neg:1840486_0",
"text": "Revelations of large scale electronic surveillance and data mining by governments and corporations have fueled increased adoption of HTTPS. We present a traffic analysis attack against over 6000 webpages spanning the HTTPS deployments of 10 widely used, industryleading websites in areas such as healthcare, finance, legal services and streaming video. Our attack identifies individual pages in the same website with 89% accuracy, exposing personal details including medical conditions, financial and legal affairs and sexual orientation. We examine evaluation methodology and reveal accuracy variations as large as 18% caused by assumptions affecting caching and cookies. We present a novel defense reducing attack accuracy to 27% with a 9% traffic increase, and demonstrate significantly increased effectiveness of prior defenses in our evaluation context, inclusive of enabled caching, user-specific cookies and pages within the same website.",
"title": ""
},
{
"docid": "neg:1840486_1",
"text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.",
"title": ""
},
{
"docid": "neg:1840486_2",
"text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.",
"title": ""
},
{
"docid": "neg:1840486_3",
"text": "Purpose – During the last decades, different quality management concepts, including total quality management (TQM), six sigma and lean, have been applied by many different organisations. Although much important work has been documented regarding TQM, six sigma and lean, a number of questions remain concerning the applicability of these concepts in various organisations and contexts. Hence, the purpose of this paper is to describe the similarities and differences between the concepts, including an evaluation and criticism of each concept. Design/methodology/approach – Within a case study, a literature review and face-to-face interviews in typical TQM, six sigma and lean organisations have been carried out. Findings – While TQM, six sigma and lean have many similarities, especially concerning origin, methodologies, tools and effects, they differ in some areas, in particular concerning the main theory, approach and the main criticism. The lean concept is slightly different from TQM and six sigma. However, there is a lot to gain if organisations are able to combine these three concepts, as they are complementary. Six sigma and lean are excellent road-maps, which could be used one by one or combined, together with the values in TQM. Originality/value – The paper provides guidance to organisations regarding the applicability and properties of quality concepts. Organisations need to work continuously with customer-orientated activities in order to survive; irrespective of how these activities are labelled. The paper will also serve as a basis for further research in this area, focusing on practical experience of these concepts.",
"title": ""
},
{
"docid": "neg:1840486_4",
"text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.",
"title": ""
},
{
"docid": "neg:1840486_5",
"text": "We present algorithms for evaluating and performing modeling operatyons on NURBS surfaces using the programmable fragment processor on the Graphics Processing Unit (GPU). We extend our GPU-based NURBS evaluator that evaluates NURBS surfaces to compute exact normals for either standard or rational B-spline surfaces for use in rendering and geometric modeling. We build on these calculations in our new GPU algorithms to perform standard modeling operations such as inverse evaluations, ray intersections, and surface-surface intersections on the GPU. Our modeling algorithms run in real time, enabling the user to sketch on the actual surface to create new features. In addition, the designer can edit the surface by interactively trimming it without the need for re-tessellation. We also present a GPU-accelerated algorithm to perform surface-surface intersection operations with NURBS surfaces that can output intersection curves in the model space as well as in the parametric spaces of both the intersecting surfaces at interactive rates.",
"title": ""
},
{
"docid": "neg:1840486_6",
"text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.",
"title": ""
},
{
"docid": "neg:1840486_7",
"text": "The question of whether there are different patterns of autonomic nervous system responses for different emotions is examined. Relevant conceptual issues concerning both the nature of emotion and the structure of the autonomic nervous system are discussed in the context of the development of research methods appropriate for studying this question. Are different emotional states associated with distinct patterns of autonomic nervous system (ANS) activity? This is an old question that is currently enjoying a modest revival in psychology. In the 1950s autonomic specificity was a key item on the agenda of the newly emerging discipline of psychophysiology, which saw as its mission the scientific exploration of the mind-body relationship using the tools of electrophysiological measurement. But the field of psychophysiology had the misfortune of coming of age during a period in which psychology drifted away from its physiological roots, a period in which psychology was dominated by learning, behaviourism, personality theory and later by cognition. Psychophysiology in the period between 1960 and 1980 reflected these broader trends in psychology by focusing on such issues as autonomic markers of perceptual states (e.g. orienting, stimulus processing), the interplay between personality factors and ANS responsivity, operant conditioning of autonomic functions, and finally, electrophysiological markers of cognitive states. Research on autonomic specificity in emotion became increasingly rare. Perhaps as a result of these historical trends in psychology, or perhaps because research on emotion and physiology is so difficult to do well, there 18 SOCIAL PSYCHOPHYSIOLOGY AND EMOTION exists only a small body of studies on ANS specificity. Although almost all of these studies report some evidence for the existence of specificity, the prevailing zeitgeist has been that specificity has not been empirically established. At this point in time a review of the existing literature would not be very informative, for it would inevitably dissolve into a critique of methods. Instead, what I hope to accomplish in this chapter is to provide a new framework for thinking about ANS specificity, and to propose guidelines for carrying out research on this issue that will be cognizant of the recent methodological and theoretical advances that have been made both in psychophysiology and in research on emotion. Emotion as organization From the outset, the definition of emotion that underlies this chapter should be made explicit. For me the essential function of emotion is organization. The selection of emotion for preservation across time and species is based on the need for an efficient mechanism than can mobilize and organize disparate response systems to deal with environmental events that pose a threat to survival. In this view the prototypical context for human emotions is those situations in which a multi-system response must be organized quickly, where time is not available for the lengthy processes of deliberation, reformulation, planning and rehearsal; where a fine degree of co-ordination is required among systems as disparate as the muscles of the face and the organs of the viscera; and where adaptive behaviours that normally reside near the bottom of behavioural hierarchies must be instantaneously shifted to the top. 
Specificity versus undifferentiated arousal In this model of emotion as organization it is assumed that each component system is capable of a number of different responses, and that the emotion will guide the selection of responses from each system. Component systems differ in terms of the number of response possibilities. Thus, in the facial expressive system a selection must be made among a limited set of prototypic emotional expressions (which are but a subset of the enormous number of expressions the face is capable of assuming). A motor behaviour must also be selected from a similarly reduced set of responses consisting of fighting, fleeing, freezing, hiding, etc. All major theories of emotion would accept the proposition that activation of the ANS is one of the changes that occur during emotion. But theories differ as to how many different ANS patterns constitute the set of selection possibilities. At one extreme are those who would argue that there are only two ANS patterns: 'off' and 'on'. The 'on' ANS pattern, according to this view, consists EMOTION AND THE AUTONOMIC NERVOUS SYSTEM 19 of a high-level, global, diffuse ANS activation, mediated primarily by the sympathetic branch of the ANS. The manifestations of this pattern rapid and forcefulcontractions of the heart, rapid and deep breathing, increased systolic blood pressure, sweating, dry mouth, redirection of blood flow to large skeletal muscles, peripheral vasoconstriction, release of large amounts of epinephrine and norepinephrine from the adrenal medulla, and the resultant release of glucose from the liver are well known. Cannon (1927) described this pattern in some detail, arguing that this kind of high-intensity, undifferentiated arousal accompanied all emotions .. Among contemporary theories the notion of undifferentiated arousal is most clearly found in Mandler's theory (Mandler, 1975). However, undifferentiated arousal also played a major role in the extraordinarily influential cognitive/physiological theory of Schachter and Singer (1962). According to this theory, undifferentiated arousal is a necessary precondition for emotionan extremely plastic medium to be moulded by cognitive processes working in concert with the available cues from the social environment. At the other extreme are those who argue that there are a large number of patterns of ANS activation, each associated with a different emotion (or subset of emotions). This is the traditional specificity position. Its classic statement is often attributed to James (1884), although Alexander (1950) provided an even more radical version. The specificity position fuelled a number of experimental studies in the 1950s and 1960s, all attempting to identify some of these autonomic patterns (e.g. Averill, 1969; Ax, 1953; Funkenstein, King and Drolette, 1954; Schachter, 1957; Sternbach, 1962). Despite these studies, all of which reported support for ANS specificity, the undifferentiated arousal theory, especially as formulated by Schachter and Singer (1962) and their followers, has been dominant for a great many years. Is the ANS capable of specific action No matter how appealing the notion of ANS specificity might be in the abstract, there would be little reason to pursue it in the laboratory if the ANS were only capable of producing one pattern of arousal. There is no question that the pattern of high-level sympathetic arousal described earlier is one pattern that the ANS can produce. 
Cannon's arguments notwithstanding, I believe there now is quite ample evidence that the ANS is capable of a number of different patterns of activation. Whether these patterns are reliably associated with different emotions remains an empirical question, but the potential is surely there. A case in support of this potential for specificity can be based on: (a) the neural structure of the ANS; (b) the stimulation neurochemistry of the ANS; and (c) empirical findings. 20 SOCIAL PSYCHOPHYSIOLOGY AND EMOTION",
"title": ""
},
{
"docid": "neg:1840486_8",
"text": "We describe a set of tools for retail analytics based on a combination of video understanding and transaction-log. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting) and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.",
"title": ""
},
{
"docid": "neg:1840486_9",
"text": "A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.",
"title": ""
},
{
"docid": "neg:1840486_10",
"text": "Industries and individuals outsource database to realize convenient and low-cost applications and services. In order to provide sufficient functionality for SQL queries, many secure database schemes have been proposed. However, such schemes are vulnerable to privacy leakage to cloud server. The main reason is that database is hosted and processed in cloud server, which is beyond the control of data owners. For the numerical range query (“>,” “<,” and so on), those schemes cannot provide sufficient privacy protection against practical challenges, e.g., privacy leakage of statistical properties, access pattern. Furthermore, increased number of queries will inevitably leak more information to the cloud server. In this paper, we propose a two-cloud architecture for secure database, with a series of intersection protocols that provide privacy preservation to various numeric-related range queries. Security analysis shows that privacy of numerical information is strongly protected against cloud providers in our proposed scheme.",
"title": ""
},
{
"docid": "neg:1840486_11",
"text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.<<ETX>>",
"title": ""
},
{
"docid": "neg:1840486_12",
"text": "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.",
"title": ""
},
{
"docid": "neg:1840486_13",
"text": "Euler Number is one of the most important characteristics in topology. In two-dimension digital images, the Euler characteristic is locally computable. The form of Euler Number formula is different under 4-connected and 8-connected conditions. Based on the definition of the Foreground Segment and Neighbor Number, a formula of the Euler Number computing is proposed and is proved in this paper. It is a new idea to locally compute Euler Number of 2D image.",
"title": ""
},
{
"docid": "neg:1840486_14",
"text": "Accurate prediction of the functional effect of genetic variation is critical for clinical genome interpretation. We systematically characterized the transcriptome effects of protein-truncating variants, a class of variants expected to have profound effects on gene function, using data from the Genotype-Tissue Expression (GTEx) and Geuvadis projects. We quantitated tissue-specific and positional effects on nonsense-mediated transcript decay and present an improved predictive model for this decay. We directly measured the effect of variants both proximal and distal to splice junctions. Furthermore, we found that robustness to heterozygous gene inactivation is not due to dosage compensation. Our results illustrate the value of transcriptome data in the functional interpretation of genetic variants.",
"title": ""
},
{
"docid": "neg:1840486_15",
"text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.",
"title": ""
},
{
"docid": "neg:1840486_16",
"text": "In this paper, we propose a vision-based multiple lane boundaries detection and estimation structure that fuses the edge features and the high intensity features. Our approach utilizes a camera as the only input sensor. The application of Kalman filter for information fusion and tracking significantly improves the reliability and robustness of our system. We test our system on roads with different driving scenarios, including day, night, heavy traffic, rain, confusing textures and shadows. The feasibility of our approach is demonstrated by quantitative evaluation using manually labeled video clips.",
"title": ""
},
{
"docid": "neg:1840486_17",
"text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.",
"title": ""
},
{
"docid": "neg:1840486_18",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "neg:1840486_19",
"text": "Following the development of computing and communication technologies, the idea of Internet of Things (IoT) has been realized not only at research level but also at application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices, then conduct face recognition using the extracted key-frames on the Cloud back-end. With our key-frame extraction engine, we are able to reduce the data volume hence dramatically relief the processing pressure of the cloud back-end. Our experimental results show with IoT edge device acceleration, it is possible to implement face in video recognition application without introducing the middle-ware or cloud-let layer, while still achieving real-time processing speed.",
"title": ""
}
] |
1840487 | Permission based Android security: Issues and countermeasures | [
{
"docid": "pos:1840487_0",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
}
] | [
{
"docid": "neg:1840487_0",
"text": "Hate speech detection in social media texts is an important Natural language Processing task, which has several crucial applications like sentiment analysis, investigating cyber bullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and hate speech detection, our work is the first attempt in detecting hate speech in HindiEnglish code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted online on Twitter. The tweets are annotated with the language at word level and the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character level, word level, and lexicon based features.",
"title": ""
},
{
"docid": "neg:1840487_1",
"text": "Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate-to our knowledge the first-practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors can be protected effectively and efficiently from this new class of attacks.",
"title": ""
},
{
"docid": "neg:1840487_2",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
},
{
"docid": "neg:1840487_3",
"text": "Inspired by “GoogleTM Sets”, we consider the problem of retrieving items from a concept or cluster, given a query consisting of a few items from that cluster. We formulate this as a Bayesian inference problem and describe a very simple algorithm for solving it. Our algorithm uses a modelbased concept of a cluster and ranks items using a score which evaluates the marginal probability that each item belongs to a cluster containing the query items. For exponential family models with conjugate priors this marginal probability is a simple function of sufficient statistics. We focus on sparse binary data and show that our score can be evaluated exactly using a single sparse matrix multiplication, making it possible to apply our algorithm to very large datasets. We evaluate our algorithm on three datasets: retrieving movies from EachMovie, finding completions of author sets from the NIPS dataset, and finding completions of sets of words appearing in the Grolier encyclopedia. We compare to Google TM Sets and show that Bayesian Sets gives very reasonable set completions.",
"title": ""
},
{
"docid": "neg:1840487_4",
"text": "Feature selection is widely used in preparing high-dimensional data for effective data mining. The explosive popularity of social media produces massive and high-dimensional data at an unprecedented rate, presenting new challenges to feature selection. Social media data consists of (1) traditional high-dimensional, attribute-value data such as posts, tweets, comments, and images, and (2) linked data that provides social context for posts and describes the relationships between social media users as well as who generates the posts, and so on. The nature of social media also determines that its data is massive, noisy, and incomplete, which exacerbates the already challenging problem of feature selection. In this article, we study a novel feature selection problem of selecting features for social media data with its social context. In detail, we illustrate the differences between attribute-value data and social media data, investigate if linked data can be exploited in a new feature selection framework by taking advantage of social science theories. We design and conduct experiments on datasets from real-world social media Web sites, and the empirical results demonstrate that the proposed framework can significantly improve the performance of feature selection. Further experiments are conducted to evaluate the effects of user--user and user--post relationships manifested in linked data on feature selection, and research issues for future work will be discussed.",
"title": ""
},
{
"docid": "neg:1840487_5",
"text": "The optical microscope remains a widely-used tool for diagnosis and quantitation of malaria. An automated system that can match the performance of well-trained technicians is motivated by a shortage of trained microscopists. We have developed a computer vision system that leverages deep learning to identify malaria parasites in micrographs of standard, field-prepared thick blood films. The prototype application diagnoses P. falciparum with sufficient accuracy to achieve competency level 1 in the World Health Organization external competency assessment, and quantitates with sufficient accuracy for use in drug resistance studies. A suite of new computer vision techniques-global white balance, adaptive nonlinear grayscale, and a novel augmentation scheme-underpin the system's state-of-the-art performance. We outline a rich, global training set; describe the algorithm in detail; argue for patient-level performance metrics for the evaluation of automated diagnosis methods; and provide results for P. falciparum.",
"title": ""
},
{
"docid": "neg:1840487_6",
"text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.",
"title": ""
},
{
"docid": "neg:1840487_7",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "neg:1840487_8",
"text": "In most real-world audio recordings, we encounter several types of audio events. In this paper, we develop a technique for detecting signature audio events, that is based on identifying patterns of occurrences of automatically learned atomic units of sound, which we call Acoustic Unit Descriptors or AUDs. Experiments show that the methodology works as well for detection of individual events and their boundaries in complex recordings.",
"title": ""
},
{
"docid": "neg:1840487_9",
"text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.",
"title": ""
},
{
"docid": "neg:1840487_10",
"text": "Level Converters are key components of multi-voltage based systems-on-chips. Recently, a great deal of research has been focused on power dissipation reduction using various types of level converters in multi-voltage systems. These level converters include either level up conversion or level down conversion. In this paper we propose a unique level converter called universal level converter (ULC). This level converter is capable of four types of level converting functions, such as up conversion, down conversion, passing and blocking. The universal level converter is simulated in CADENCE using 90nm PTM technology model files. Three types of analysis such as power, parametric and load analysis are performed on the proposed level converter. The power analysis results prove that the proposed level converter has an average power reduction of approximately 87.2% compared to other existing level converters at different technology nodes. The parametric analysis and load analysis show that the proposed level converter provides a stable output for input voltages as low as 0.6V with a varying load from 1fF-200fF. The universal level converter works at dual voltages of 1.2V and 1.02V (85% of Vddh) with VTH value for NMOS as 0.339V and for PMOS as -0.339V. The ULC has an average power consumption of 27.1μW at a load",
"title": ""
},
{
"docid": "neg:1840487_11",
"text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.",
"title": ""
},
{
"docid": "neg:1840487_12",
"text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes. Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations as human clinical studies are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.",
"title": ""
},
{
"docid": "neg:1840487_13",
"text": "Although many sentiment lexicons in different languages exist, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first formulates the problem as a PU learning problem. It then proposes a new PU learning method suitable for the problem based on a neural network. The results are further enhanced with a new dictionary lookup technique and a novel polarity classification algorithm. Experimental results show that the proposed approach greatly outperforms baseline methods.",
"title": ""
},
{
"docid": "neg:1840487_14",
"text": "This paper presents some of the findings from a recent project that conducted a virtual ethnographic study of three formal courses in higher education that use ‘Web 2.0’or social technologies for learning and teaching. It describes the pedagogies adopted within these courses, and goes on to explore some key themes emerging from the research and relating to the pedagogical use of weblogs and wikis in particular. These themes relate primarily to the academy’s tendency to constrain and contain the possibly more radical effects of these new spaces. Despite this, the findings present a range of student and tutor perspectives which show that these technologies have significant potential as new collaborative, volatile and challenging environments for formal learning.",
"title": ""
},
{
"docid": "neg:1840487_15",
"text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in",
"title": ""
},
{
"docid": "neg:1840487_16",
"text": "While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. OPEN ACCESS Sensors 2014, 14 18852",
"title": ""
},
{
"docid": "neg:1840487_17",
"text": "Our generation has seen the boom and ubiquitous advent of Internet connectivity. Adversaries have been exploiting this omnipresent connectivity as an opportunity to launch cyber attacks. As a consequence, researchers around the globe devoted a big attention to data mining and machine learning with emphasis on improving the accuracy of intrusion detection system (IDS). In this paper, we present a few-shot deep learning approach for improved intrusion detection. We first trained a deep convolutional neural network (CNN) for intrusion detection. We then extracted outputs from different layers in the deep CNN and implemented a linear support vector machine (SVM) and 1-nearest neighbor (1-NN) classifier for few-shot intrusion detection. few-shot learning is a recently developed strategy to handle situation where training samples for a certain class are limited. We applied our proposed method to the two well-known datasets simulating intrusion in a military network: KDD 99 and NSL-KDD. These datasets are imbalanced, and some classes have much less training samples than others. Experimental results show that the proposed method achieved better performances than the state-of-the-art on those two datasets.",
"title": ""
},
{
"docid": "neg:1840487_18",
"text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be egodepleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.",
"title": ""
},
{
"docid": "neg:1840487_19",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
}
] |
1840488 | Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings | [
{
"docid": "pos:1840488_0",
"text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at the SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2rd (among 27 submissions) for the restaurant and laptop domain respectively.",
"title": ""
},
{
"docid": "pos:1840488_1",
"text": "Recent systems have been developed for sentiment classification, opinion recogni tion, and opinion analysis (e.g., detect ing polarity and strength). We pursue an other aspect of opinion analysis: identi fying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Con ditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source iden tification as a sequence tagging task, Au toSlog learns extraction patterns. Our re sults show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.",
"title": ""
},
{
"docid": "pos:1840488_2",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize. com/projects/wordreprs/",
"title": ""
}
] | [
{
"docid": "neg:1840488_0",
"text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macroand micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, potassium, phosphorous, zinc-solubilizing V.S. Meena (*) Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India Indian Council of Agricultural Research – Vivekananda Institute of Hill Agriculture, Almora 263601, Uttarakhand, India e-mail: vijayssac.bhu@gmail.com; vijay.meena@icar.gov.in I. Bahadur • B.R. Maurya Department of Soil Science and Agricultural Chemistry, Institute of Agricultural Sciences, Banaras Hindu University, Varanasi 221005, Uttar Pradesh, India A. Kumar Department of Botany, MMV, Banaras Hindu University, Varanasi 221005, India R.K. Meena Department of Plant Sciences, School of Life Sciences, University of Hyderabad, Hyderabad 500046, TG, India S.K. Meena Division of Soil Science and Agricultural Chemistry, Indian Agriculture Research Institute, New Delhi 110012, India J.P. Verma Institute of Environment and Sustainable Development, Banaras Hindu University, Varanasi 22100, Uttar Pradesh, India # Springer India 2016 V.S. Meena et al. (eds.), Potassium Solubilizing Microorganisms for Sustainable Agriculture, DOI 10.1007/978-81-322-2776-2_1 1 microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.",
"title": ""
},
{
"docid": "neg:1840488_1",
"text": "The estimation of effort involved in developing a software product plays an important role in determining the success or failure of the product. Project managers require a reliable approach for software effort estimation. It is especially important during the early stage of the software development life cycle. An accurate software effort estimation is a major concern in current industries. In this paper, the main goal is to estimate the effort required to develop various software projects using class point approach. Then optimization of the effort parameters is achieved using adaptive regression based Multi-Layer Perceptron (ANN) technique to obtain better accuracy. Furthermore, a comparative analysis of software effort estimation using Multi-Layer Perceptron (ANN) and Radial Basis Function Network (RBFN) has been provided. By estimating the software projects accurately, we can have softwares with acceptable quality within budget and on planned schedules.",
"title": ""
},
{
"docid": "neg:1840488_2",
"text": "For decades—even prior to its inception—AI has aroused both fear and excitement as humanity has contemplated creating machines like ourselves. Unfortunately, the misconception that “intelligent” artifacts should necessarily be human-like has largely blinded society to the fact that we have been achieving AI for some time. Although AI that surpasses human ability grabs headlines (think of Watson, Deep Mind, or alphaGo), AI has been a standard part of the industrial repertoire since at least the 1980s, with expert systems checking circuit boards and credit card transactions. Machine learning (ML) strategies for generating AI have also long been used, such as genetic algorithms for nding solutions to intractable computational problems like scheduling, and neural networks not only to model and understand human learning but also for basic industrial control, monitoring, and classi cation. In the 1990s, probabilistic and Bayesian methods revolutionized ML and opened the door to one of the most pervasive AI abilities now available: searching through massive troves of data. Innovations in AI and ML algorithms have extended our capacity to nd information in texts, allowing us to search photographs as well as both recorded and live video and audio. We can translate, transcribe, read lips, read emotions (including lying), forge signatures and other handwriting, and forge video. Yet, the downside of these bene ts is ever present. As we write this, allegations are circulating that the Standardizing Ethical Design for Artifi cial Intelligence and Autonomous Systems",
"title": ""
},
{
"docid": "neg:1840488_3",
"text": "Document image binarization is an important step in the document image analysis and recognition pipeline. H-DIBCO 2014 is the International Document Image Binarization Competition which is dedicated to handwritten document images organized in conjunction with ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. This paper reports on the contest details including the evaluation measures used as well as the performance of the 7 submitted methods along with a short description of each method.",
"title": ""
},
{
"docid": "neg:1840488_4",
"text": "In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the art supervised hashing and unsupervised quantization algorithms.",
"title": ""
},
{
"docid": "neg:1840488_5",
"text": "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Codes will be made publicly available1.",
"title": ""
},
{
"docid": "neg:1840488_6",
"text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.",
"title": ""
},
{
"docid": "neg:1840488_7",
"text": "We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond the full-detection approaches in terms of time performance optimization. A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.",
"title": ""
},
{
"docid": "neg:1840488_8",
"text": "This paper presents the results of the WMT10 and MetricsMATR10 shared tasks,1 which included a translation task, a system combination task, and an evaluation task to investigate new MT metrics. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon’s Mechanical Turk.",
"title": ""
},
{
"docid": "neg:1840488_9",
"text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.",
"title": ""
},
{
"docid": "neg:1840488_10",
"text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationallyweak one. Bywrapping the C++ library in Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required serverclient workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally. This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.",
"title": ""
},
{
"docid": "neg:1840488_11",
"text": "Roughly 1.3 billion people in developing countries still live without access to reliable electricity. As expanding access using current technologies will accelerate global climate change, there is a strong need for novel solutions that displace fossil fuels and are financially viable for developing regions. A novel DC microgrid solution that is geared at maximizing efficiency and reducing system installation cost is described in this paper. Relevant simulation and experimental results, as well as a proposal for undertaking field-testing of the technical and economic viability of the microgrid system are presented.",
"title": ""
},
{
"docid": "neg:1840488_12",
"text": "Multimedia retrieval plays an indispensable role in big data utilization. Past efforts mainly focused on single-media retrieval. However, the requirements of users are highly flexible, such as retrieving the relevant audio clips with one query of image. So challenges stemming from the “media gap,” which means that representations of different media types are inconsistent, have attracted increasing attention. Cross-media retrieval is designed for the scenarios where the queries and retrieval results are of different media types. As a relatively new research topic, its concepts, methodologies, and benchmarks are still not clear in the literature. To address these issues, we review more than 100 references, give an overview including the concepts, methodologies, major challenges, and open issues, as well as build up the benchmarks, including data sets and experimental results. Researchers can directly adopt the benchmarks to promptly evaluate their proposed methods. This will help them to focus on algorithm design, rather than the time-consuming compared methods and results. It is noted that we have constructed a new data set XMedia, which is the first publicly available data set with up to five media types (text, image, video, audio, and 3-D model). We believe this overview will attract more researchers to focus on cross-media retrieval and be helpful to them.",
"title": ""
},
{
"docid": "neg:1840488_13",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "neg:1840488_14",
"text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.",
"title": ""
},
{
"docid": "neg:1840488_15",
"text": "The use of the social media sites are growing rapidly to interact with the communities and to share the ideas among others. It may happen that most of the people dislike the ideas of others person views and make the use of the offensive language in their posts. Due to these offensive terms, many people especially youth and teenagers try to adopt such language and spread over the social media sites which may significantly affect the others people innocent minds. As offensive terms increasingly use by the people in highly manner, it is difficult to find or classify such offensive terms in real day to day life. To overcome from these problem, the proposed system analyze the offensive language and can classify the offensive sentence on a particular topic discussion using the support vector machine (SVM) as supervised classification in the data mining. The proposed system also can find the potential user by means of whom the offensive language spread among others and define the comparative analysis of SVM with Naive Bayes technique.",
"title": ""
},
{
"docid": "neg:1840488_16",
"text": "Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed by several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts on their ability in implementing high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the used activation function. The obtained results seem to support the idea that deep networks actually implements functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.",
"title": ""
},
{
"docid": "neg:1840488_17",
"text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.",
"title": ""
},
{
"docid": "neg:1840488_18",
"text": "Smart homes have become increasingly popular for IoT products and services with a lot of promises for improving the quality of life of individuals. Nevertheless, the heterogeneous, dynamic, and Internet-connected nature of this environment adds new concerns as private data becomes accessible, often without the householders' awareness. This accessibility alongside with the rising risks of data security and privacy breaches, makes smart home security a critical topic that deserves scrutiny. In this paper, we present an overview of the privacy and security challenges directed towards the smart home domain. We also identify constraints, evaluate solutions, and discuss a number of challenges and research issues where further investigation is required.",
"title": ""
},
{
"docid": "neg:1840488_19",
"text": "0 7 4 0 7 4 5 9 / 0 0 / $ 1 0 . 0 0 © 2 0 0 1 I E E E While such techniques1 form the foundation for many contemporary software engineering practices, requirements analysis has to involve more than understanding and modeling the functions, data, and interfaces for a new system. In addition, the requirements engineer needs to explore alternatives and evaluate their feasibility and desirability with respect to business goals. For instance, suppose your task is to build a system to schedule meetings. First, you might want to explore whether the system should do most of the scheduling work or only record meetings. Then you might want to evaluate these requirements with respect to technical objectives (such as response time) and business objectives (such as meeting effectiveness, low costs, or system usability). Once you select an alternative to best meet overall objectives, you can further refine the meaning of terms such as “meeting,” “participant,” or “scheduling conflict.” You can also define the basic functions the system will support. The need to explore alternatives and evaluate them with respect to business objectives has led to research on goal-oriented analysis.2,3 We argue here that goal-oriented analysis complements and strengthens traditional requirements analysis techniques by offering a means for capturing and evaluating alternative ways of meeting business goals. The remainder of this article details the five main steps that comprise goal-oriented analysis. These steps include goal analysis, softgoal analysis, softgoal correlation analysis, goal correlation analysis, and evaluation of alterfeature",
"title": ""
}
] |
1840489 | Bringing Deep Learning at the Edge of Information-Centric Internet of Things | [
{
"docid": "pos:1840489_0",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
},
{
"docid": "pos:1840489_1",
"text": "In view of evolving the Internet infrastructure, ICN is promoting a communication model that is fundamentally different from the traditional IP address-centric model. The ICN approach consists of the retrieval of content by (unique) names, regardless of origin server location (i.e., IP address), application, and distribution channel, thus enabling in-network caching/replication and content-based security. The expected benefits in terms of improved data dissemination efficiency and robustness in challenging communication scenarios indicate the high potential of ICN as an innovative networking paradigm in the IoT domain. IoT is a challenging environment, mainly due to the high number of heterogeneous and potentially constrained networked devices, and unique and heavy traffic patterns. The application of ICN principles in such a context opens new opportunities, while requiring careful design choices. This article critically discusses potential ways toward this goal by surveying the current literature after presenting several possible motivations for the introduction of ICN in the context of IoT. Major challenges and opportunities are also highlighted, serving as guidelines for progress beyond the state of the art in this timely and increasingly relevant topic.",
"title": ""
},
{
"docid": "pos:1840489_2",
"text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services devices deployed at network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of distributed architecture and close to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes future infrastructure on IoT development. To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and present the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of the state of the art IoT development. To investigate the fog/edge computing-based IoT, this paper also investigate the relationship between IoT and fog/edge computing, and discuss issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT to be implemented in real-world applications.",
"title": ""
}
] | [
{
"docid": "neg:1840489_0",
"text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.",
"title": ""
},
{
"docid": "neg:1840489_1",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital",
"title": ""
},
{
"docid": "neg:1840489_2",
"text": "This paper introduces the first stiffness controller for continuum robots. The control law is based on an accurate approximation of a continuum robot's coupled kinematic and static force model. To implement a desired tip stiffness, the controller drives the actuators to positions corresponding to a deflected robot configuration that produces the required tip force for the measured tip position. This approach provides several important advantages. First, it enables the use of robot deflection sensing as a means to both sense and control tip forces. Second, it enables stiffness control to be implemented by modification of existing continuum robot position controllers. The proposed controller is demonstrated experimentally in the context of a concentric tube robot. Results show that the stiffness controller achieves the desired stiffness in steady state, provides good dynamic performance, and exhibits stability during contact transitions.",
"title": ""
},
{
"docid": "neg:1840489_3",
"text": "In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.",
"title": ""
},
{
"docid": "neg:1840489_4",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
{
"docid": "neg:1840489_5",
"text": "Feature selection is one of the techniques in machine learning for selecting a subset of relevant features namely variables for the construction of models. The feature selection technique aims at removing the redundant or irrelevant features or features which are strongly correlated in the data without much loss of information. It is broadly used for making the model much easier to interpret and increase generalization by reducing the variance. Regression analysis plays a vital role in statistical modeling and in turn for performing machine learning tasks. The traditional procedures such as Ordinary Least Squares (OLS) regression, Stepwise regression and partial least squares regression are very sensitive to random errors. Many alternatives have been established in the literature during the past few decades such as Ridge regression and LASSO and its variants. This paper explores the features of the popular regression methods, OLS regression, ridge regression and the LASSO regression. The performance of these procedures has been studied in terms of model fitting and prediction accuracy using real data and simulated environment with the help of R package.",
"title": ""
},
{
"docid": "neg:1840489_6",
"text": "User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common in WSNs because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M.L Das protocol and a cryptanalysis of Das' protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. The ECC-based protocol is shown to be suitable for higher security WSNs.",
"title": ""
},
{
"docid": "neg:1840489_7",
"text": "In the recent years antenna design appears as a mature field of research. It really is not the fact because as the technology grows with new ideas, fitting expectations in the antenna design are always coming up. A Ku-band patch antenna loaded with notches and slit has been designed and simulated using Ansoft HFSS 3D electromagnetic simulation tool. Multi-frequency band operation is obtained from the proposed microstrip antenna. The design was carried out using Glass PTFE as the substrate and copper as antenna material. The designed antennas resonate at 15GHz with return loss over 50dB & VSWR less than 1, on implementing different slots in the radiating patch multiple frequencies resonate at 12.2GHz & 15.00GHz (Return Loss -27.5, -37.73 respectively & VSWR 0.89, 0.24 respectively) and another resonate at 11.16 GHz, 15.64GHz & 17.73 GHz with return loss -18.99, -23.026, -18.156 dB respectively and VSWR 1.95, 1.22 & 2.1 respectively. All the above designed band are used in the satellite application for non-geostationary orbit (NGSO) and fixed-satellite services (FSS) providers to operate in various segments of the Ku-band.",
"title": ""
},
{
"docid": "neg:1840489_8",
"text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.",
"title": ""
},
{
"docid": "neg:1840489_9",
"text": "This work presents a chaotic path planning generator which is used in autonomous mobile robots, in order to cover a terrain. The proposed generator is based on a nonlinear circuit, which shows chaotic behavior. The bit sequence, produced by the chaotic generator, is converted to a sequence of planned positions, which satisfies the requirements for unpredictability and fast scanning of the entire terrain. The nonlinear circuit and the trajectory-planner are described thoroughly. Simulation tests confirm that with the proposed path planning generator better results can be obtained with regard to previous works. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840489_10",
"text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.",
"title": ""
},
{
"docid": "neg:1840489_11",
"text": "Fasting has been practiced for millennia, but, only recently, studies have shed light on its role in adaptive cellular responses that reduce oxidative damage and inflammation, optimize energy metabolism, and bolster cellular protection. In lower eukaryotes, chronic fasting extends longevity, in part, by reprogramming metabolic and stress resistance pathways. In rodents intermittent or periodic fasting protects against diabetes, cancers, heart disease, and neurodegeneration, while in humans it helps reduce obesity, hypertension, asthma, and rheumatoid arthritis. Thus, fasting has the potential to delay aging and help prevent and treat diseases while minimizing the side effects caused by chronic dietary interventions.",
"title": ""
},
{
"docid": "neg:1840489_12",
"text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.",
"title": ""
},
{
"docid": "neg:1840489_13",
"text": "The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluate variants of target-propagation (TP) and feedback alignment (FA) on MINIST, CIFAR, and ImageNet datasets, and find that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018), and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. ar X iv :1 81 1. 03 56 7v 2 [ cs .L G ] 2 5 N ov 2 01 8 BIOLOGICALLY-PLAUSIBLE LEARNING ALGORITHMS CAN SCALE TO LARGE DATASETS",
"title": ""
},
{
"docid": "neg:1840489_14",
"text": "We present a hydro-elastic actuator that has a linear spring intentionally placed in series between the hydraulic piston and actuator output. The spring strain is measured to get an accurate estimate of force. This measurement alone is used in PI feedback to control the force in the actuator. The spring allows for high force fidelity, good force control, minimum impedance, and large dynamic range. A third order linear actuator model is broken into two fundamental cases: fixed load – high force (forward transfer function), and free load – zero force (impedance). These two equations completely describe the linear characteristics of the actuator. This model is presented with dimensional analysis to allow for generalization. A prototype actuator that demonstrates force control and low impedance is also presented. Dynamic analysis of the prototype actuator correlates well with the linear mathematical model. This work done with hydraulics is an extension from previous work done with electro-mechanical actuators. Keywords— Series Elastic Actuator, Force Control, Hydraulic Force Control, Biomimetic Robots",
"title": ""
},
{
"docid": "neg:1840489_15",
"text": "A key issue in delay tolerant networks (DTN) is to find the right node to store and relay messages. We consider messages annotated with the unique keywords describing themessage subject, and nodes also adds keywords to describe their mission interests, priority and their transient social relationship (TSR). To offset resource costs, an incentive mechanism is developed over transient social relationships which enrich enroute message content and motivate better semantically related nodes to carry and forward messages. The incentive mechanism ensures avoidance of congestion due to uncooperative or selfish behavior of nodes.",
"title": ""
},
{
"docid": "neg:1840489_16",
"text": "Inelastic collisions between the galactic cosmic rays (GCRs) and the interstellar medium (ISM) are responsible for producing essentially all of the light elements Li, Be, and B (LiBeB) observed in the cosmic rays. Previous calculations (e.g., [1]) have shown that GCR fragmentation can explain the bulk of the existing LiBeB abundance in the present day Galaxy. However, elemental abundances of LiBeB in old halo stars indicate inconsistencies with this explanation. We have used a simple leaky-box model to predict the cosmic-ray elemental and isotopic abundances of LiBeB in the present epoch. We conducted a survey of recent scientific literature on fragmentation cross sections and have calculated the amount of uncertainty they introduce into our model. The predicted particle intensities of this model were compared with high energy (EisM=200-500 MeV/nucleon) cosmic-ray data from the Cosmic Ray Isotope Spectrometer (CRIS), which indicates fairly good agreement with absolute fluxes for Z?:. 5 and relative isotopic abundances for all LiBeB species.",
"title": ""
},
{
"docid": "neg:1840489_17",
"text": "Clustering is an important data mining task for exploration and visualization of different data types like news stories, scientific publications, weblogs, etc. Due to the evolving nature of these data, evolutionary clustering, also known as dynamic clustering, has recently emerged to cope with the challenges of mining temporally smooth clusters over time. A good evolutionary clustering algorithm should be able to fit the data well at each time epoch, and at the same time results in a smooth cluster evolution that provides the data analyst with a coherent and easily interpretable model. In this paper we introduce the temporal Dirichlet process mixture model (TDPM) as a framework for evolutionary clustering. TDPM is a generalization of the DPM framework for clustering that automatically grows the number of clusters with the data. In our framework, the data is divided into epochs; all data points inside the same epoch are assumed to be fully exchangeable, whereas the temporal order is maintained across epochs. Moreover, The number of clusters in each epoch is unbounded: the clusters can retain, die out or emerge over time, and the actual parameterization of each cluster can also evolve over time in a Markovian fashion. We give a detailed and intuitive construction of this framework using the recurrent Chinese restaurant process (RCRP) metaphor, as well as a Gibbs sampling algorithm to carry out posterior inference in order to determine the optimal cluster evolution. We demonstrate our model over simulated data by using it to build an infinite dynamic mixture of Gaussian factors, and over real dataset by using it to build a simple non-parametric dynamic clustering-topic model and apply it to analyze the NIPS12 document collection.",
"title": ""
},
{
"docid": "neg:1840489_18",
"text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.",
"title": ""
},
{
"docid": "neg:1840489_19",
"text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.",
"title": ""
}
] |
1840490 | Relative localization and communication module for small-scale multi-robot systems | [
{
"docid": "pos:1840490_0",
"text": "Most of the research in the field of robotics is focussed on solving the problem of Simultaneous Localization and Mapping(SLAM). In general the problem is solved using a single robot. In the article written by R. Grabowski, C. Paredis and P. Hkosla, called ”Heterogeneous Teams of Modular Robots for Mapping end Exploration” a novel localization method is presented based on multiple robots.[Grabowski, 2000] For this purpose the relative distance between the different robots is calculated. These measurements, together with the positions estimated using dead reckoning, are used to determine the most likely new positions of the agents. Knowing the positions is essential when pursuing accurate (team) mapping capabilities. The proposed method makes it possible for heterogeneous team of modular centimeter-scale robots to collaborate and map unexplored environments.",
"title": ""
}
] | [
{
"docid": "neg:1840490_0",
"text": "These last years, several new home automation boxes appeared on the market, the new radio-based protocols facilitating their deployment with respect to previously wired solutions. Coupled with the wider availability of connected objects, these protocols have allowed new users to set up home automation systems by themselves. In this paper, we relate an in situ observational study of these builders in order to understand why and how the smart habitats were developed and used. We led 10 semi-structured interviews in households composed of at least 2 adults and equipped for at least 1 year, and 47 home automation builders answered an online questionnaire at the end of the study. Our study confirms, specifies and exhibits additional insights about usages and means of end-user development in the context of home automation.",
"title": ""
},
{
"docid": "neg:1840490_1",
"text": "Accurate velocity estimation is an important basis for robot control, but especially challenging for highly elastically driven robots. These robots show large swing or oscillation effects if they are not damped appropriately during the performed motion. In this letter, we consider an ultralightweight tendon-driven series elastic robot arm equipped with low-resolution joint position encoders. We propose an adaptive Kalman filter for velocity estimation that is suitable for these kinds of robots with a large range of possible velocities and oscillation frequencies. Based on an analysis of the parameter characteristics of the measurement noise variance, an update rule based on the filter position error is developed that is easy to adjust for use with different sensors. Evaluation of the filter both in simulation and in robot experiments shows a smooth and accurate performance, well suited for control purposes.",
"title": ""
},
{
"docid": "neg:1840490_2",
"text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.",
"title": ""
},
{
"docid": "neg:1840490_3",
"text": "The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.",
"title": ""
},
{
"docid": "neg:1840490_4",
"text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.",
"title": ""
},
{
"docid": "neg:1840490_5",
"text": "Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks—pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a largescale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes1. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.",
"title": ""
},
{
"docid": "neg:1840490_6",
"text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.",
"title": ""
},
{
"docid": "neg:1840490_7",
"text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.",
"title": ""
},
{
"docid": "neg:1840490_8",
"text": "In this paper we develop a new family of Ordered Weighted Averaging (OWA) operators. Weight vector is obtained from a desired orness of the operator. Using Faulhaber’s formulas we obtain direct and simple expressions for the weight vector without any iteration loop. With the exception of one weight, the remaining follow a straight line relation. As a result, a fast and robust algorithm is developed. The resulting weight vector is suboptimal according with the Maximum Entropy criterion, but it is very close to the optimal. Comparisons are done with other procedures.",
"title": ""
},
{
"docid": "neg:1840490_9",
"text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding",
"title": ""
},
{
"docid": "neg:1840490_10",
"text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.",
"title": ""
},
{
"docid": "neg:1840490_11",
"text": "Geophysical applications of radar interferometry to measure changes in the Earth's surface have exploded in the early 1990s. This new geodetic technique calculates the interference pattern caused by the difference in phase between two images acquired by a spaceborne synthetic aperture radar at two distinct times. The resulting interferogram is a contour map of the change in distance between the ground and the radar instrument. These maps provide an unsurpassed spatial sampling density (---100 pixels km-2), a competitive precision (---1 cm), and a useful observation cadence (1 pass month-•). They record movements in the crust, perturbations in the atmosphere, dielectric modifications in the soil, and relief in the topography. They are also sensitive to technical effects, such as relative variations in the radar's trajectory or variations in its frequency standard. We describe how all these phenomena contribute to an interferogram. Then a practical summary explains the techniques for calculating and manipulating interferograms from various radar instruments, including the four satellites currently in orbit: ERS-1, ERS-2, JERS-1, and RADARSAT. The next chapter suggests some guidelines for interpreting an interferogram as a geophysical measurement: respecting the limits of the technique, assessing its uncertainty, recognizing artifacts, and discriminating different types of signal. We then review the geophysical applications published to date, most of which study deformation related to earthquakes, volcanoes, and glaciers using ERS-1 data. We also show examples of monitoring natural hazards and environmental alterations related to landslides, subsidence, and agriculture. In addition, we consider subtler geophysical signals such as postseismic relaxation, tidal loading of coastal areas, and interseismic strain accumulation. We conclude with our perspectives on the future of radar interferometry. The objective of the review is for the reader to develop the physical understanding necessary to calculate an interferogram and the geophysical intuition necessary to interpret it.",
"title": ""
},
{
"docid": "neg:1840490_12",
"text": "In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.",
"title": ""
},
{
"docid": "neg:1840490_13",
"text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.",
"title": ""
},
{
"docid": "neg:1840490_14",
"text": "Every question asked by a therapist may be seen to embody some intent and to arise from certain assumptions. Many questions are intended to orient the therapist to the client's situation and experiences; others are asked primarily to provoke therapeutic change. Some questions are based on lineal assumptions about the phenomena being addressed; others are based on circular assumptions. The differences among these questions are not trivial. They tend to have dissimilar effects. This article explores these issues and offers a framework for distinguishing four major groups of questions. The framework may be used by therapists to guide their decision making about what kinds of questions to ask, and by researchers to study different interviewing styles.",
"title": ""
},
{
"docid": "neg:1840490_15",
"text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.",
"title": ""
},
{
"docid": "neg:1840490_16",
"text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.",
"title": ""
},
{
"docid": "neg:1840490_17",
"text": "The internet connectivity of client software (e.g., apps running on phones and PCs), web sites, and online services provide an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called A/B tests, split tests, randomized experiments, control/treatment tests, and online field experiments. Unlike most data mining techniques for finding correlational patterns, controlled experiments allow establishing a causal relationship with high probability. Experimenters can utilize the Scientific Method to form a hypothesis of the form “If a specific change is introduced, will it improve key metrics?” and evaluate it with real users. The theory of a controlled experiment dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, and the topic of offline experiments is well developed in Statistics (Box 2005). Online Controlled Experiments started to be used in the late 1990s with the growth of the Internet. Today, many large sites, including Amazon, Bing, Facebook, Google, LinkedIn, and Yahoo! run thousands to tens of thousands of experiments each year testing user interface (UI) changes, enhancements to algorithms (search, ads, personalization, recommendation, etc.), changes to apps, content management system, etc. Online controlled experiments are now considered an indispensable tool, and their use is growing for startups and smaller websites. Controlled experiments are especially useful in combination with Agile software development (Martin 2008, Rubin 2012), Steve Blank’s Customer Development process (Blank 2005), and MVPs (Minimum Viable Products) popularized by Eric Ries’s Lean Startup (Ries 2011). Motivation and Background Many good resources are available with motivation and explanations about online controlled experiments (Siroker and Koomen 2013, Goward 2012, McFarland 2012, Schrage 2014, Kohavi, Longbotham and Sommerfield, et al. 2009, Kohavi, Deng and Longbotham, et al. 2014, Kohavi, Deng and Frasca, et al. 2013).",
"title": ""
},
{
"docid": "neg:1840490_18",
"text": "Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered as a extension of the zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One of the limitation of the BD is that the sum rate does not grow linearly with the number of users and transmit antennas at low and medium signal-to-noise ratio regime, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. Also it performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of the BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach of the conventional BD. Applying this idea to the MMSE channel inversion for identifying orthonormal basis vectors of the precoder, and employing the MMSE criterion for finding its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.",
"title": ""
},
{
"docid": "neg:1840490_19",
"text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.",
"title": ""
}
] |
1840491 | Features for Masking-Based Monaural Speech Separation in Reverberant Conditions | [
{
"docid": "pos:1840491_0",
"text": "In the real world, speech is usually distorted by both reverberation and background noise. In such conditions, speech intelligibility is degraded substantially, especially for hearing-impaired (HI) listeners. As a consequence, it is essential to enhance speech in the noisy and reverberant environment. Recently, deep neural networks have been introduced to learn a spectral mapping to enhance corrupted speech, and shown significant improvements in objective metrics and automatic speech recognition score. However, listening tests have not yet shown any speech intelligibility benefit. In this paper, we propose to enhance the noisy and reverberant speech by learning a mapping to reverberant target speech rather than anechoic target speech. A preliminary listening test was conducted, and the results show that the proposed algorithm is able to improve speech intelligibility of HI listeners in some conditions. Moreover, we develop a masking-based method for denoising and compare it with the spectral mapping method. Evaluation results show that the masking-based method outperforms the mapping-based method.",
"title": ""
}
] | [
{
"docid": "neg:1840491_0",
"text": "Here we critically review studies that used electroencephalography (EEG) or event-related potential (ERP) indices as a biomarker of Alzheimer's disease. In the first part we overview studies that relied on visual inspection of EEG traces and spectral characteristics of EEG. Second, we survey analysis methods motivated by dynamical systems theory (DST) as well as more recent network connectivity approaches. In the third part we review studies of sleep. Next, we compare the utility of early and late ERP components in dementia research. In the section on mismatch negativity (MMN) studies we summarize their results and limitations and outline the emerging field of computational neurology. In the following we overview the use of EEG in the differential diagnosis of the most common neurocognitive disorders. Finally, we provide a summary of the state of the field and conclude that several promising EEG/ERP indices of synaptic neurotransmission are worth considering as potential biomarkers. Furthermore, we highlight some practical issues and discuss future challenges as well.",
"title": ""
},
{
"docid": "neg:1840491_1",
"text": "Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.\n We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.",
"title": ""
},
{
"docid": "neg:1840491_2",
"text": "Microservices have become a popular pattern for deploying scale-out application logic and are used at companies like Netflix, IBM, and Google. An advantage of using microservices is their loose coupling, which leads to agile and rapid evolution, and continuous re-deployment. However, developers are tasked with managing this evolution and largely do so manually by continuously collecting and evaluating low-level service behaviors. This is tedious, error-prone, and slow. We argue for an approach based on service evolution modeling in which we combine static and dynamic information to generate an accurate representation of the evolving microservice-based system. We discuss how our approach can help engineers manage service upgrades, architectural evolution, and changing deployment trade-offs.",
"title": ""
},
{
"docid": "neg:1840491_3",
"text": "Inspired by the success of self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent codebased schemes in text generation. Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results. In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE. We benchmark two latent code-based methods (AAE and ARAE) designed based on adversarial setups. In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures. The experiments demonstrate the proposed (self) attention-based models outperform the state-of-the-art in adversarial code-based text generation.",
"title": ""
},
{
"docid": "neg:1840491_4",
"text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.",
"title": ""
},
{
"docid": "neg:1840491_5",
"text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.",
"title": ""
},
{
"docid": "neg:1840491_6",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "neg:1840491_7",
"text": "An object-oriented simulation (OOS) consists of a set of objects that interact with each other over time. This paper provides a thorough introduction to OOS, addresses the important issue of composition versus inheritance, describes frames and frameworks for OOS, and presents an example of a network simulation language as an illustration of OOS.",
"title": ""
},
{
"docid": "neg:1840491_8",
"text": "Block coordinate descent (BCD) methods are widely-used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper we explore all three of these building blocks and propose variations for each that can lead to significantly faster BCD methods. We (i) propose new greedy block-selection strategies that guarantee more progress per iteration than the Gauss-Southwell rule; (ii) explore practical issues like how to implement the new rules when using “variable” blocks; (iii) explore the use of message-passing to compute matrix or Newton updates efficiently on huge blocks for problems with a sparse dependency between variables; and (iv) consider optimal active manifold identification, which leads to bounds on the “active-set complexity” of BCD methods and leads to superlinear convergence for certain problems with sparse solutions (and in some cases finite termination at an optimal solution). We support all of our findings with numerical results for the classic machine learning problems of least squares, logistic regression, multi-class logistic regression, label propagation, and L1-regularization.",
"title": ""
},
{
"docid": "neg:1840491_9",
"text": "Online dating sites have become popular platforms for people to look for potential romantic partners. It is important to understand users' dating preferences in order to make better recommendations on potential dates. The message sending and replying actions of a user are strong indicators for what he/she is looking for in a potential date and reflect the user's actual dating preferences. We study how users' online dating behaviors correlate with various user attributes using a real-world dateset from a major online dating site in China. Our study provides a firsthand account of the user online dating behaviors in China, a country with a large population and unique culture. The results can provide valuable guidelines to the design of recommendation engine for potential dates.",
"title": ""
},
{
"docid": "neg:1840491_10",
"text": "Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as stragglers. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers and more recently, those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adopt the framework of Tandon et al. [1] and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines $f$ theoretically permissible, via an $O(f^{2})$ decoding algorithm. The idea is based on a suitably designed Reed-Solomon code that has a sparsest and balanced generator matrix. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.",
"title": ""
},
{
"docid": "neg:1840491_11",
"text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.",
"title": ""
},
{
"docid": "neg:1840491_12",
"text": "Region growing and edge detection are two popular and common techniques used for image segmentation. Region growing is preferred over edge detection methods because it is more robust against low contrast problems and effectively addresses the connectivity issues faced by edge detectors. Edgebased techniques, on the other hand, can significantly reduce useless information while preserving the important structural properties in an image. Recent studies have shown that combining region growing and edge methods for segmentation will produce much better results. This paper proposed using edge information to automatically select seed pixels and guide the process of region growing in segmenting geometric objects from an image. The geometric objects are songket motifs from songket patterns. Songket motifs are the main elements that decorate songket pattern. The beauty of songket lies in the elaborate design of the patterns and combination of motifs that are intricately woven on the cloth. After experimenting on thirty songket pattern images, the proposed method achieved promising extraction of the songket motifs.",
"title": ""
},
{
"docid": "neg:1840491_13",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "neg:1840491_14",
"text": "PURPOSE\nTo examine the feasibility and preliminary benefits of an integrative cognitive behavioral therapy (CBT) with adolescents with inflammatory bowel disease and anxiety.\n\n\nDESIGN AND METHODS\nNine adolescents participated in a CBT program at their gastroenterologist's office. Structured diagnostic interviews, self-report measures of anxiety and pain, and physician-rated disease severity were collected pretreatment and post-treatment.\n\n\nRESULTS\nPostintervention, 88% of adolescents were treatment responders, and 50% no longer met criteria for their principal anxiety disorder. Decreases were demonstrated in anxiety, pain, and disease severity.\n\n\nPRACTICE IMPLICATIONS\nAnxiety screening and a mental health referral to professionals familiar with medical management issues is important.",
"title": ""
},
{
"docid": "neg:1840491_15",
"text": "Sexual orientation is one of the largest sex differences in humans. The vast majority of the population is heterosexual, that is, they are attracted to members of the opposite sex. However, a small but significant proportion of people are bisexual or homosexual and experience attraction to members of the same sex. The origins of the phenomenon have long been the subject of scientific study. In this chapter, we will review the evidence that sexual orientation has biological underpinnings and consider the involvement of epigenetic mechanisms. We will first discuss studies that show that sexual orientation has a genetic component. These studies show that sexual orientation is more concordant in monozygotic twins than in dizygotic ones and that male sexual orientation is linked to several regions of the genome. We will then highlight findings that suggest a link between sexual orientation and epigenetic mechanisms. In particular, we will consider the case of women with congenital adrenal hyperplasia (CAH). These women were exposed to high levels of testosterone in utero and have much higher rates of nonheterosexual orientation compared to non-CAH women. Studies in animal models strongly suggest that the long-term effects of hormonal exposure (such as those experienced by CAH women) are mediated by epigenetic mechanisms. We conclude by describing a hypothetical framework that unifies genetic and epigenetic explanations of sexual orientation and the continued challenges facing sexual orientation research.",
"title": ""
},
{
"docid": "neg:1840491_16",
"text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9 percent. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, tokensequence, SBPH, and Markovian ddiscriminators. The results deomonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. MIT Spam Conference 2004 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2004 201 Broadway, Cambridge, Massachusetts 02139 The Spam-Filtering Accuracy Plateau at 99.9% Accuracy and How to Get Past It. William S. Yerazunis, PhD* Presented at the 2004 MIT Spam Conference January 18, 2004 MIT, Cambridge, Massachusetts Abstract: Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.",
"title": ""
},
{
"docid": "neg:1840491_17",
"text": "Hardware technologies for trusted computing, or trusted execution environments (TEEs), have rapidly matured over the last decade. In fact, TEEs are at the brink of widespread commoditization with the recent introduction of Intel Software Guard Extensions (Intel SGX). Despite such rapid development of TEE, software technologies for TEE significantly lag behind their hardware counterpart, and currently only a select group of researchers have the privilege of accessing this technology. To address this problem, we develop an open source platform, called OpenSGX, that emulates Intel SGX hardware components at the instruction level and provides new system software components necessarily required for full TEE exploration. We expect that the OpenSGX framework can serve as an open platform for SGX research, with the following contributions. First, we develop a fully functional, instruction-compatible emulator of Intel SGX for enabling the exploration of software/hardware design space, and development of enclave programs. OpenSGX provides a platform for SGX development, meaning that it provides not just emulation but also operating system components, an enclave program loader/packager, an OpenSGX user library, debugging, and performance monitoring. Second, to show OpenSGX’s use cases, we applied OpenSGX to protect sensitive information (e.g., directory) of Tor nodes and evaluated their potential performance impacts. Therefore, we believe OpenSGX has great potential for broader communities to spark new research on soon-to-becommodity Intel SGX.",
"title": ""
},
{
"docid": "neg:1840491_18",
"text": "Penetration testing is widely used to help ensure the security of web applications. Using penetration testing, testers discover vulnerabilities by simulating attacks on a target web application. To do this efficiently, testers rely on automated techniques that gather input vector information about the target web application and analyze the application’s responses to determine whether an attack was successful. Techniques for performing these steps are often incomplete, which can leave parts of the web application untested and vulnerabilities undiscovered. This paper proposes a new approach to penetration testing that addresses the limitations of current techniques. The approach incorporates two recently developed analysis techniques to improve input vector identification and detect when attacks have been successful against a web application. This paper compares the proposed approach against two popular penetration testing tools for a suite of web applications with known and unknown vulnerabilities. The evaluation results show that the proposed approach performs a more thorough penetration testing and leads to the discovery of more vulnerabilities than both the tools. Copyright q 2011 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840491_19",
"text": "Learning a deep model from small data is yet an opening and challenging problem. We focus on one-shot classification by deep learning approach based on a small quantity of training samples. We proposed a novel deep learning approach named Local Contrast Learning (LCL) based on the key insight about a human cognitive behavior that human recognizes the objects in a specific context by contrasting the objects in the context or in her/his memory. LCL is used to train a deep model that can contrast the recognizing sample with a couple of contrastive samples randomly drawn and shuffled. On one-shot classification task on Omniglot, the deep model based LCL with 122 layers and 1.94 millions of parameters, which was trained on a tiny dataset with only 60 classes and 20 samples per class, achieved the accuracy 97.99% that outperforms human and state-of-the-art established by Bayesian Program Learning (BPL) trained on 964 classes. LCL is a fundamental idea which can be applied to alleviate parametric model’s overfitting resulted by lack of training samples.",
"title": ""
}
] |
1840492 | Origami Robot: A Self-Folding Paper Robot With an Electrothermal Actuator Created by Printing | [
{
"docid": "pos:1840492_0",
"text": "This paper introduces a low cost, fast and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of 'instant inkjet circuits' is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large area sensors and high frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Having presented this exciting new technology, we explain the tools and techniques we have found useful for the first time. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities that are enabled by way of several example applications which we have built. We believe that this technology will be of immediate appeal to researchers in the ubiquitous computing domain, since it supports the fabrication of a variety of functional electronic device prototypes.",
"title": ""
}
] | [
{
"docid": "neg:1840492_0",
"text": "When the cost of misclassifying a sample is high, it is useful to have an accurate estimate of uncertainty in the prediction for that sample. There are also multiple types of uncertainty which are best estimated in different ways, for example, uncertainty that is intrinsic to the training set may be well-handled by a Bayesian approach, while uncertainty introduced by shifts between training and query distributions may be better-addressed by density/support estimation. In this paper, we examine three types of uncertainty: model capacity uncertainty, intrinsic data uncertainty, and open set uncertainty, and review techniques that have been derived to address each one. We then introduce a unified hierarchical model, which combines methods from Bayesian inference, invertible latent density inference, and discriminative classification in a single end-to-end deep neural network topology to yield efficient per-sample uncertainty estimation. Our approach addresses all three uncertainty types and readily accommodates prior/base rates for binary detection.",
"title": ""
},
{
"docid": "neg:1840492_1",
"text": "Conceptual blending has been proposed as a creative cognitive process, but most theories focus on the analysis of existing blends rather than mechanisms for the efficient construction of novel blends. While conceptual blending is a powerful model for creativity, there are many challenges related to the computational application of blending. Inspired by recent theoretical research, we argue that contexts and context-induced goals provide insights into algorithm design for creative systems using conceptual blending. We present two case studies of creative systems that use goals and contexts to efficiently produce novel, creative artifacts in the domains of story generation and virtual characters engaged in pretend play respectively.",
"title": ""
},
{
"docid": "neg:1840492_2",
"text": "Recently, some publications indicated that the generative modeling approaches, i.e., topic models, achieved appreciated performance on multi-label classification, especially for skewed data sets. In this paper, we develop two supervised topic models for multi-label classification problems. The two models, i.e., Frequency-LDA (FLDA) and Dependency-Frequency-LDA (DFLDA), extend Latent Dirichlet Allocation (LDA) via two observations, i.e., the frequencies of the labels and the dependencies among different labels. We train the models by the Gibbs sampler algorithm. The experiment results on well known collections demonstrate that our two models outperform the state-of-the-art approaches. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840492_3",
"text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.",
"title": ""
},
{
"docid": "neg:1840492_4",
"text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.",
"title": ""
},
{
"docid": "neg:1840492_5",
"text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.",
"title": ""
},
{
"docid": "neg:1840492_6",
"text": "Sign language is important for facilitating communication between hearing impaired and the rest of society. Two approaches have traditionally been used in the literature: image-based and sensor-based systems. Sensor-based systems require the user to wear electronic gloves while performing the signs. The glove includes a number of sensors detecting different hand and finger articulations. Image-based systems use camera(s) to acquire a sequence of images of the hand. Each of the two approaches has its own disadvantages. The sensor-based method is not natural as the user must wear a cumbersome instrument while the imagebased system requires specific background and environmental conditions to achieve high accuracy. In this paper, we propose a new approach for Arabic Sign Language Recognition (ArSLR) which involves the use of the recently introduced Leap Motion Controller (LMC). This device detects and tracks the hand and fingers to provide position and motion information. We propose to use the LMC as a backbone of the ArSLR system. In addition to data acquisition, the system includes a preprocessing stage, a feature extraction stage, and a classification stage. We compare the performance of Multilayer Perceptron (MLP) neural networks with the Nave Bayes classifier. Using the proposed system on the Arabic sign alphabets gives 98% classification accuracy with the Nave Bayes classifier and more than 99% using the MLP.",
"title": ""
},
{
"docid": "neg:1840492_7",
"text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.",
"title": ""
},
{
"docid": "neg:1840492_8",
"text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.",
"title": ""
},
{
"docid": "neg:1840492_9",
"text": "Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, publicly available data sets are few, often contain samples from subjects with too similar characteristics, and very often lack of specific information so that is not possible to select subsets of samples according to specific criteria. In this article, we present a new smartphone accelerometer dataset designed for activity recognition. The dataset includes 11,771 activities performed by 30 subjects of ages ranging from 18 to 60 years. Activities are divided in 17 fine grained classes grouped in two coarse grained classes: 9 types of activities of daily living (ADL) and 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with two different classifiers and with different configurations. The best results are achieved with k-NN classifying ADLs only, considering personalization, and with both windows of 51 and 151 samples.",
"title": ""
},
{
"docid": "neg:1840492_10",
"text": "Protein phase separation is implicated in formation of membraneless organelles, signaling puncta and the nuclear pore. Multivalent interactions of modular binding domains and their target motifs can drive phase separation. However, forces promoting the more common phase separation of intrinsically disordered regions are less understood, with suggested roles for multivalent cation-pi, pi-pi, and charge interactions and the hydrophobic effect. Known phase-separating proteins are enriched in pi-orbital containing residues and thus we analyzed pi-interactions in folded proteins. We found that pi-pi interactions involving non-aromatic groups are widespread, underestimated by force-fields used in structure calculations and correlated with solvation and lack of regular secondary structure, properties associated with disordered regions. We present a phase separation predictive algorithm based on pi interaction frequency, highlighting proteins involved in biomaterials and RNA processing.",
"title": ""
},
{
"docid": "neg:1840492_11",
"text": "This article presents a novel parallel spherical mechanism called Argos with three rotational degrees of freedom. Design aspects of the first prototype built of the Argos mechanism are discussed. The direct kinematic problem is solved, leading always to four nonsingular configurations of the end effector for a given set of joint angles. The inverse-kinematic problem yields two possible configurations for each of the three pantographs for a given orientation of the end effector. Potential applications of the Argos mechanism are robot wrists, orientable machine tool beds, joy sticks, surgical manipulators, and orientable units for optical components. Another pantograph based new structure named PantoScope having two rotational DoF is also briefly introduced. KEY WORDS—parallel robot, machine tool, 3 degree of freedom (DoF) wrist, pure orientation, direct kinematics, inverse kinematics, Pantograph based, Argos, PantoScope",
"title": ""
},
{
"docid": "neg:1840492_12",
"text": "ÐWhile empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behavior. This paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout. Index TermsÐQualitative methods, data collection, data analysis, experimental design, empirical software engineering, participant observation, interviewing.",
"title": ""
},
{
"docid": "neg:1840492_13",
"text": "5 Objectivity in parentheses 7 5.0 Illusion and Perception: the traditional approach . . . . . . . . . . . . . . . . . . . . . 7 5.1 An Invitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5.2 Objectivity in parentheses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5.3 The Universum versus the Multiversa . . . . . . . . . . . . . . . . . . . . . . . . . . . 8",
"title": ""
},
{
"docid": "neg:1840492_14",
"text": "Musical genre classification is the automatic classification of audio signals into user defined labels describing pieces of music. A problem inherent to genre classification experiments in music information retrieval research is the use of songs from the same artist in both training and test sets. We show that this does not only lead to overoptimistic accuracy results but also selectively favours particular classification approaches. The advantage of using models of songs rather than models of genres vanishes when applying an artist filter. The same holds true for the use of spectral features versus fluctuation patterns for preprocessing of the audio files.",
"title": ""
},
{
"docid": "neg:1840492_15",
"text": "Software engineering is forecast to be among the fastest growing employment field in the next decades. The purpose of this investigation is two-fold: Firstly, empirical studies on the personality types of software professionals are reviewed. Secondly, this work provides an upto-date personality profile of software engineers according to the Myers–Briggs Type Indicator. r 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840492_16",
"text": "Values are presented for body constants based on a study of nine male white cadavers of normal appearance and average build. The limb data are supplemented by a further analysis of 11 upper and 41 lower limbs. Techniques used in the study form standard procedures that can be duplicated by subsequent workers. Each cadaver was measured, weighed, and somatotyped. Joints were placed in the midposition of the movement range and the body was frozen rigid. Joint angles were bisected in a systematic dismemberment procedure to produce unit segments. These segment lengths were weighed, measured for linear link dimensions, and analysed for segment volumes. The segment centers of mass were located relative to link end points as well as in relation to anatomical landmarks. Finally, each segment was dissected into its component parts and these were weighed. The specific gravity of each body part was calculated separately. Data are expressed in mean values together with standard deviations and, where available, are correlated and evaluated with other values in the literature. Data on the relative bulk of body segments have been scarce. Until recently, the only users of information dealing with the mass and proportion of the human figure have been sculptors and graphic artists. These people usually met their needs through canons of proportions and a trained perception rather than by actual measurement. There are no substitutes though for good empirical data when critical work on body mechanics or accurate analyses of human locomotion are attempted. During the past decade or so, the need for such information has been recognized specifically in designing prosthetic and orthotic devices for the limbs of handicapped persons, for sports analysis, for the construction of test dummies, such as those subjected to vehicular crashes, and for studies on the dynamics of body impacts in crashes and falls. The fundamental nature of data on the mass and dimensions of the body parts cannot be questioned. It is odd that even now there is such a dearth of information. The research literature up to the present contains usable body segment measurements from only 12 (or possibly 14) unpreserved and dismembered cadavers, all adult white males. A tabulation of data in an Air Force technical report (Dempster, '55a), dealing with seven specimens caAM. J. ANAT., 120: 33-54. daver by cadaver, was the first amplification of the scanty records in more than two generations. The tables on Michigan cadavers were reprinted by Krogman and Johnston ( '63) in an abridgment of the original report; Williams and Lisner ( '62) presented their own simplifications based on the same study; Barter ('57), Duggar ( '62) and Contini, Drillis, and Bluestein ( '63) have made tallys of data from the original tabulations along with parts of the older data. None of these studies gave any attention to the procedural distinctions between workers who had procured original data; one even grouped volumes and masses indiscriminately as masses. The Michigan data, however, have not been summarized nor evaluated up to this time. Since the procedures and, especially, the limiting conditions incidental to the gathering of body-segment data, have not been commented on critically since Braune and Fischer (1889), a comprehensive discussion of the entire problem at this point should help further work in this important area. 1 Supported in part by research grants from the Public Health Service National Institutes of Health (GM-07741-06). 
and from the office of Vocational Rehabilitation (RD-216 60-C), wlth support a dozen vears earlier' from a research contract with the Anthropometric Unit of the Wright Air Development Center Wright-Patterson Air Force Base, Dayton, Ohio (AF 18 (600)-43 Project no. 7414).",
"title": ""
},
{
"docid": "neg:1840492_17",
"text": "This paper presents the design and implementation of an evanescent tunable combline filter based on electronic tuning with the use of RF-MEMS capacitor banks. The use of MEMS tuning circuit results in the compact implementation of the proposed filter with high-Q and near to zero DC power consumption. The proposed filter consist of combline resonators with tuning disks that are loaded with RF-MEMS capacitor banks. A two-pole filter is designed and measured based on the proposed tuning concept. The filter operates at 2.5 GHz with a bandwidth of 22 MHz. Measurement results demonstrate a tuning range of 110 MHz while the quality factor is above 374 (1300–374 over the tuning range).",
"title": ""
},
{
"docid": "neg:1840492_18",
"text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.",
"title": ""
}
] |
1840493 | Multi-Sentence Compression: Finding Shortest Paths in Word Graphs | [
{
"docid": "pos:1840493_0",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
}
] | [
{
"docid": "neg:1840493_0",
"text": "Linear blending is a very popular skinning technique for virtual characters, even though it does not always generate realistic deformations. Recently, nonlinear blending techniques (such as dual quaternions) have been proposed in order to improve upon the deformation quality of linear skinning. The trade-off consists of the increased vertex deformation time and the necessity to redesign parts of the 3D engine. In this paper, we demonstrate that any nonlinear skinning technique can be approximated to an arbitrary degree of accuracy by linear skinning, using just a few samples of the nonlinear blending function (virtual bones). We propose an algorithm to compute this linear approximation in an automatic fashion, requiring little or no interaction with the user. This enables us to retain linear skinning at the core of our 3D engine without compromising the visual quality or character setup costs.",
"title": ""
},
{
"docid": "neg:1840493_1",
"text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.",
"title": ""
},
{
"docid": "neg:1840493_2",
"text": "A multiple input, multiple output (MIMO) radar emits probings signals with multiple transmit antennas and records the reflections from targets with multiple receive antennas. Estimating the relative angles, delays, and Doppler shifts from the received signals allows to determine the locations and velocities of the targets. Standard approaches to MIMO radar based on digital matched filtering or compressed sensing only resolve the angle-delay-Doppler triplets on a (1/(NTNR), 1/B, 1/T ) grid, where NT and NR are the number of transmit and receive antennas, B is the bandwidth of the probing signals, and T is the length of the time interval over which the reflections are observed. In this work, we show that the continuous angle-delay-Doppler triplets and the corresponding attenuation factors can be recovered perfectly by solving a convex optimization problem. This result holds provided that the angle-delay-Doppler triplets are separated either by 10/(NTNR - 1) in angle, 10.01/B in delay, or 10.01/T in Doppler direction. Furthermore, this result is optimal (up to log factors) in the number of angle-delay-Doppler triplets that can be recovered.",
"title": ""
},
{
"docid": "neg:1840493_3",
"text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.",
"title": ""
},
{
"docid": "neg:1840493_4",
"text": "In the field of robots' obstacle avoidance and navigation, indirect contact sensors such as visual, ultrasonic and infrared detection are widely used. However, the performance of these sensors is always influenced by the severe environment, especially under the dark, dense fog, underwater conditions. The obstacle avoidance robot based on tactile sensor is proposed in this paper to realize the autonomous obstacle avoidance navigation by only using three dimensions force sensor. In addition, the mathematical model and algorithm are optimized to make up the deficiency of tactile sensor. Finally, the feasibility and reliability of this study are verified by the simulation results.",
"title": ""
},
{
"docid": "neg:1840493_5",
"text": "Adaptive cruise control is one of the most widely used vehicle driver assistance systems. However, uncertainty about drivers' lane change maneuvers in surrounding vehicles, such as unexpected cut-in, remains a challenge. We propose a novel adaptive cruise control framework combining convolution neural network (CNN)-based lane-change-intention inference and a predictive controller. We transform real-world driving data, collected on public roads with only standard production sensors, to a simplified bird's-eye view. This enables a CNN-based inference approach with low computational cost and robustness to noisy input. The predicted inference of traffic participants' lane change intention is utilized to improve safety and ride comfort with model predictive control. Simulation results based on driving scene reconstruction demonstrate the superior performance of inference using the proposed CNN-based approach, as well as enhanced safety and ride comfort.",
"title": ""
},
{
"docid": "neg:1840493_6",
"text": "An important problem in analyzing big data is subspace clustering, i.e., to represent a collection of points in a high-dimensional space via the union of low-dimensional subspaces. Sparse Subspace Clustering (SSC) and LowRank Representation (LRR) are the state-of-the-art methods for this task. These two methods are fundamentally similar in that both are based on convex optimization exploiting the intuition of “Self-Expressiveness”. The main difference is that SSC minimizes the vector `1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of the success of the algorithm. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can take advantage of both methods in preserving the “Self-Expressiveness Property” and “Graph Connectivity” at the same time. A byproduct of our analysis is that it also expands the theoretical guarantee of SSC to handle cases when the subspaces have arbitrarily small canonical angles but are “nearly independent”.",
"title": ""
},
{
"docid": "neg:1840493_7",
"text": "M any security faculty members and practitioners bemoan the lack of good books in the field. Those of us who teach often find ourselves forced to rely on collections of papers to fortify our courses. In the last few years, however, we've started to see the appearance of some high-quality books to support our endeavors. Matt Bishop's book—Com-puter Security: Art and Science—is definitely hefty and packed with lots of information. It's a large book (with more than 1,000 pages), and it covers most any computer security topic that might be of interest. section discusses basic security issues at the definitional level. The Policy section addresses the relationship between policy and security, examining several types of policies in the process. Implementation I covers cryptography and its role in security. Implementation II describes how to apply policy requirements in systems. The Assurance section, which Elisabeth Sullivan wrote, introduces assurance basics and formal methods. The Special Topics section discusses malicious logic, vulnerability analysis , auditing, and intrusion detection. Finally, the Practicum ties all the previously discussed material to real-world examples. A ninth additional section, called End Matter, discusses miscellaneous supporting mathematical topics and concludes with an example. At a publisher's list price of US$74.99, you'll want to know why you should consider buying such an expensive book. Several things set it apart from other, similar, offerings. Most importantly , the book provides numerous examples and, refreshingly, definitions. A vertical bar alongside the examples distinguishes them from other text, so picking them out is easy. The book also includes a bibliography of over 1,000 references. Additionally, each chapter includes a summary, suggestions for further reading, research issues, and practice exercises. The format and layout are good, and the fonts are readable. The book is aimed at several audiences , and the preface describes many roadmaps, one of which discusses dependencies among the various chapters. Instructors can use it at the advanced undergraduate level or for introductory graduate-level computer-security courses. The preface also includes a mapping of suggested topics for undergraduate and graduate courses, presuming a certain amount of math and theoretical computer-science background as prerequisites. Practitioners can use the book as a resource for information on specific topics; the examples in the Practicum are ideally suited for them. So, what's the final verdict? Practitioners will want to consider this book as a reference to add to their bookshelves. Teachers of advanced undergraduate or introductory …",
"title": ""
},
{
"docid": "neg:1840493_8",
"text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.",
"title": ""
},
{
"docid": "neg:1840493_9",
"text": "Purpose of this study is to determine whether cash flow impacts business failure prediction using the BP models (Altman z-score, or Neural Network, or any of the BP models which could be implemented having objective to predict the financial distress or more complex financial failure-bankruptcy of the banks or companies). Units of analysis are financial ratios derived from raw financial data: B/S, P&L statements (income statements) and cash flow statements of both failed and non-failed companies/corporates that have been collected from the auditing resources and reports performed. A number of these studies examined whether a cash flow improve the prediction of business failure. The authors would have the objective to show the evidence and usefulness and efficacy of statistical models such as Altman Z-score discriminant analysis bankruptcy predictive models to assess client on going concern status. Failed and non-failed companies were selected for analysis to determine whether the cash flow improves the business failure prediction aiming to proof that the cash flow certainly makes better financial distress and bankruptcy prediction possible. Key-Words: bankruptcy prediction, financial distress, financial crisis, transition economy, auditing statement, balance sheet, profit and loss accounts, income statements",
"title": ""
},
{
"docid": "neg:1840493_10",
"text": "Widespread personalized computing systems play an already important and fast-growing role in diverse contexts, such as location-based services, recommenders, commercial Web-based services, and teaching systems. The personalization in these systems is driven by information about the user, a user model. Moreover, as computers become both ubiquitous and pervasive, personalization operates across the many devices and information stores that constitute the user's personal digital ecosystem. This enables personalization, and the user models driving it, to play an increasing role in people's everyday lives. This makes it critical to establish ways to address key problems of personalization related to privacy, invisibility of personalization, errors in user models, wasted user models, and the broad issue of enabling people to control their user models and associated personalization. We offer scrutable user models as a foundation for tackling these problems.\n This article argues the importance of scrutable user modeling and personalization, illustrating key elements in case studies from our work. We then identify the broad roles for scrutable user models. The article describes how to tackle the technical and interface challenges of designing and building scrutable user modeling systems, presenting design principles and showing how they were established over our twenty years of work on the Personis software framework. Our contributions are the set of principles for scrutable personalization linked to our experience from creating and evaluating frameworks and associated applications built upon them. These constitute a general approach to tackling problems of personalization by enabling users to scrutinize their user models as a basis for understanding and controlling personalization.",
"title": ""
},
{
"docid": "neg:1840493_11",
"text": "Text line detection is a prerequisite procedure of mathematical formula recognition, however, many incorrectly segmented text lines are often produced due to the two-dimensional structures of mathematics when using existing segmentation methods such as Projection Profiles Cutting or white space analysis. In consequence, mathematical formula recognition is adversely affected by these incorrectly detected text lines, with errors propagating through further processes. Aimed at mathematical formula recognition, we propose a text line detection method to produce reliable line segmentation. Based on the results produced by PPC, a learning based merging strategy is presented to combine incorrectly split text lines. In the merging strategy, the features of layout and text for a text line and those between successive lines are utilised to detect the incorrectly split text lines. Experimental results show that the proposed approach obtains good performance in detecting text lines from mathematical documents. Furthermore, the error rate in mathematical formula identification is reduced significantly through adopting the proposed text line detection method.",
"title": ""
},
{
"docid": "neg:1840493_12",
"text": "The unprecedented success of deep learning is largely dependent on the availability of massive amount of training data. In many cases, these data are crowd-sourced and may contain sensitive and confidential information, therefore, pose privacy concerns. As a result, privacy-preserving deep learning has been gaining increasing focus nowadays. One of the promising approaches for privacy-preserving deep learning is to employ differential privacy during model training which aims to prevent the leakage of sensitive information about the training data via the trained model. While these models are considered to be immune to privacy attacks, with the advent of recent and sophisticated attack models, it is not clear how well these models trade-off utility for privacy. In this paper, we systematically study the impact of a sophisticated machine learning based privacy attack called the membership inference attack against a state-of-the-art differentially private deep model. More specifically, given a differentially private deep model with its associated utility, we investigate how much we can infer about the model’s training data. Our experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries by only offering poor model utility, while exhibit moderate vulnerability to the membership inference attack when they offer an acceptable utility. For evaluating our experiments, we use the CIFAR-10 and MNIST datasets and the corresponding classification tasks.",
"title": ""
},
{
"docid": "neg:1840493_13",
"text": "We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.",
"title": ""
},
{
"docid": "neg:1840493_14",
"text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.",
"title": ""
},
{
"docid": "neg:1840493_15",
"text": "The airline industry is undergoing a very difficult time and many companies are in search of service segmentation strategies that will satisfy different target market segments. This study attempts to identify the service dimensions that matter most to current airline passengers. The research measures and compares differences in passengers’ expectations of the desired airline service quality in terms of the dimensions of reliability; assurance; facilities; employees; flight patterns; customization and responsiveness. Primary data were collected from passengers departing Hong Kong airport. Regarding the service dimension expectations, differences analysis shows that there are no statistically significant differences between passengers who made their own airline choice (decision makers) and those who did not (non-decision makers). However, there are significant differences among passengers of different ethnic groups/nationalities as well as among passengers who travel for different purposes, such as business, holiday and visiting friends/relatives. The findings also indicate that passengers consistently rank ‘assurance’ as the most important service dimension. This indicates that passengers are concerned about the safety and security aspect and this may indicate why there has been such a downturn in demand as this study was conducted just prior to the World Trade Center incident on the 11th September 2001. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840493_16",
"text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.",
"title": ""
},
{
"docid": "neg:1840493_17",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "neg:1840493_18",
"text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"title": ""
},
{
"docid": "neg:1840493_19",
"text": "Term structures of default probabilities are omnipresent in credit risk modeling: time-dynamic credit portfolio models, default times, and multi-year pricing models, they all need the time evolution of default probabilities as a basic model input. Although people tend to believe that from an economic point of view the Markov property as underlying model assumption is kind of questionable it seems to be common market practice to model PD term structures via Markov chain techniques. In this paper we illustrate that the Markov assumption carries us quite far if we allow for nonhomogeneous time behaviour of the Markov chain generating the PD term structures. As a ‘proof of concept’ we calibrate a nonhomogeneous continuous-time Markov chain (NHCTMC) to observed one-year rating migrations and multi-year default frequencies, hereby achieving convincing approximation quality. 1 Markov Chains in Credit Risk Modeling The probability of default (PD) for a client is a fundamental risk parameter in credit risk management. It is common practice to assign to every rating grade in a bank’s masterscale a one-year PD in line with regulatory requirements; see [1]. Table 1 shows an example for default frequencies assigned to rating grades from Standard and Poor’s (S&P). D AAA 0.00% AA 0.01% A 0.04% BBB 0.29% BB 1.28% B 6.24% CCC 32.35% Table 1: One-year default frequencies (D) assigned to S&P ratings; see [17], Table 9. Moreover, credit risk modeling concepts like dependent default times, multi-year credit pricing, and multi-horizon economic capital require more than just one-year PDs. For multi-year credit risk modeling, banks need a whole term structure (p R )t≥0 of (cumulative) PDs for every rating grade R; see, e.g., [2] for an introduction to PD term structures and [3] for their application to structured credit products. Every bank has its own (proprietary) way to calibrate PD term structures to bank-internal and external data. A look into the literature reveals that for the generation of PD term structures various Markov chain approaches, most often based on homogeneous chains, dominate current market practice. A landmarking paper in this direction is the work by Jarrow, Lando, and Turnbull [7]. Further research has been done by various authors, see, e.g., Kadam [8], Lando [10], Sarfaraz et al. [12], Schuermann and Jafry [14, 15], Trueck and Oezturkmen [18], just to mention a few examples. A new approach via Markov mixtures has been presented recently by Frydman and Schuermann [5]. In Markov chain theory (see [11]) one distinguishes between discrete-time and continuous-time chains. For instance, a discrete-time chain can be specified by a one-year migration or transition 1In the literature, PD term structures are sometimes called credit curves. 2A Markov chain is called homogeneous if transition probabilities do not depend on time.",
"title": ""
}
] |
1840494 | Unified Point-of-Interest Recommendation with Temporal Interval Assessment | [
{
"docid": "pos:1840494_0",
"text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.",
"title": ""
}
] | [
{
"docid": "neg:1840494_0",
"text": "Two aspects of children’s early gender development the spontaneous production of gender labels and sex-typed play were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children’s gender labeling as based on mothers’ biweekly reports on their children’s language from 9 through 21 months. Videotapes of children’s play both alone and with mother at 17 and 21 months were independently analyzed for play with gender stereotyped and neutral toys. Finally, the relation between gender labeling and sex-typed play was examined. Children transitioned to using gender labels at approximately 19 months on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in sex-typed play, suggesting that knowledge of gender categories might influence sex-typing before the age of 2.",
"title": ""
},
{
"docid": "neg:1840494_1",
"text": "Wireless sensor networks have become increasingly popular due to their wide range of applications. Energy consumption is one of the biggest constraints of the wireless sensor node and this limitation combined with a typical deployment of large number of nodes have added many challenges to the design and management of wireless sensor networks. They are typically used for remote environment monitoring in areas where providing electrical power is difficult. Therefore, the devices need to be powered by batteries and alternative energy sources. Because battery energy is limited, the use of different techniques for energy saving is one of the hottest topics in WSNs. In this work, we present a survey of power saving and energy optimization techniques for wireless sensor networks, which enhances the ones in existence and introduces the reader to the most well known available methods that can be used to save energy. They are analyzed from several points of view: Device hardware, transmission, MAC and routing protocols.",
"title": ""
},
{
"docid": "neg:1840494_2",
"text": "The advanced microgrid is envisioned to be a critical part of the future smart grid because of its local intelligence, automation, interoperability, and distributed energy resources (DER) hosting capability. The enabling technology of advanced microgrids is the microgrid management system (MGMS). In this article, we discuss and review the concept of the MGMS and state-of-the-art solutions regarding centralized and distributed MGMSs in the primary, secondary, and tertiary levels, from which we observe a general tendency toward decentralization.",
"title": ""
},
{
"docid": "neg:1840494_3",
"text": "Radio spectrum has become a precious resource, and it has long been the dream of wireless communication engineers to maximize the utilization of the radio spectrum. Dynamic Spectrum Access (DSA) and Cognitive Radio (CR) have been considered promising to enhance the efficiency and utilization of the spectrum. In current overlay cognitive radio, spectrum sensing is first performed to detect the spectrum holes for the secondary user to harness. However, in a more sophisticated cognitive radio, the secondary user needs to detect more than just the existence of primary users and spectrum holes. For example, in a hybrid overlay/underlay cognitive radio, the secondary use needs to detect the transmission power and localization of the primary users as well. In this paper, we combine the spectrum sensing and primary user power/localization detection together, and propose to jointly detect not only the existence of primary users but the power and localization of them via compressed sensing. Simulation results including the miss detection probability (MDP), false alarm probability (FAP) and reconstruction probability (RP) confirm the effectiveness and robustness of the proposed method.",
"title": ""
},
{
"docid": "neg:1840494_4",
"text": "Locally weighted projection regression is a new algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it uses locally linear models, spanned by a small number of univariate regressions in selected directions in input space. This paper evaluates different methods of projection regression and derives a nonlinear function approximator based on them. This nonparametric local learning system i) learns rapidly with second order learning methods based on incremental training, ii) uses statistically sound stochastic cross validation to learn iii) adjusts its weighting kernels based on local information only, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in evaluations with up to 50 dimensional data sets. To our knowledge, this is the first truly incremental spatially localized learning method to combine all these properties.",
"title": ""
},
{
"docid": "neg:1840494_5",
"text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which-by applying a vacuum to enclosed grains-causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator-and use of jamming for robotic applications in general-could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.",
"title": ""
},
{
"docid": "neg:1840494_6",
"text": "Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github. com/artetxem/vecmap.",
"title": ""
},
{
"docid": "neg:1840494_7",
"text": "Most state-of-the-art methods for representation learning are supervised, which require a large number of labeled data. This paper explores a novel unsupervised approach for learning visual representation. We introduce an image-wise discrimination criterion in addition to a pixel-wise reconstruction criterion to model both individual images and the difference between original images and reconstructed ones during neural network training. These criteria induce networks to focus on not only local features but also global high-level representations, so as to provide a competitive alternative to supervised representation learning methods, especially in the case of limited labeled data. We further introduce a competition mechanism to drive each component to increase its capability to win its adversary. In this way, the identity of representations and the likeness of reconstructed images to original ones are alternately improved. Experimental results on several tasks demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "neg:1840494_8",
"text": "Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work. The MapNet project webpage is https://goo.gl/mRB3Au.",
"title": ""
},
{
"docid": "neg:1840494_9",
"text": "Although all the cells in an organism contain the same genetic information, differences in the cell phenotype arise from the expression of lineage-specific genes. During myelopoiesis, external differentiating signals regulate the expression of a set of transcription factors. The combined action of these transcription factors subsequently determines the expression of myeloid-specific genes and the generation of monocytes and macrophages. In particular, the transcription factor PU.1 has a critical role in this process. We review the contribution of several transcription factors to the control of macrophage development.",
"title": ""
},
{
"docid": "neg:1840494_10",
"text": "Text preprocessing and segmentation are critical tasks in search and text mining applications. Due to the huge amount of documents that are exclusively presented in PDF format, most of the Data Mining (DM) and Information Retrieval (IR) systems must extract content from the PDF files. In some occasions this is a difficult task: the result of the extraction process from a PDF file is plain text, and it should be returned in the same order as a human would read the original PDF file. However, current tools for PDF text extraction fail in this objective when working with complex documents with multiple columns. For instance, this is the case of official government bulletins with legal information. In this task, it is mandatory to get correct and ordered text as a result of the application of the PDF extractor. It is very usual that a legal article in a document refers to a previous article and they should be offered in the right sequential order. To overcome these difficulties we have designed a new method for extraction of text in PDFs that simulates the human reading order. We evaluated our method and compared it against other PDF extraction tools and algorithms. Evaluation of our approach shows that it significantly outperforms the results of the existing tools and algorithms.",
"title": ""
},
{
"docid": "neg:1840494_11",
"text": "The paper presents theoretical analyses, simulations and design of a PTAT (proportional to absolute temperature) temperature sensor that is based on the vertical PNP structure and dedicated to CMOS VLSI circuits. Performed considerations take into account specific properties of materials that forms electronic elements. The electrothermal simulations are performed in order to verify the unwanted self-heating effect of the sensor",
"title": ""
},
{
"docid": "neg:1840494_12",
"text": "The latest election cycle generated sobering examples of the threat that fake news poses to democracy. Primarily disseminated by hyper-partisan media outlets, fake news proved capable of becoming viral sensations that can dominate social media and influence elections. To address this problem, we begin with stance detection, which is a first step towards identifying fake news. The goal of this project is to identify whether given headline-article pairs: (1) agree, (2) disagree, (3) discuss the same topic, or (4) are not related at all, as described in [1]. Our method feeds the headline-article pairs into a bidirectional LSTM which first analyzes the article and then uses the acquired article representation to analyze the headline. On top of the output of the conditioned bidirectional LSTM, we concatenate global statistical features extracted from the headline-article pairs. We report a 9.7% improvement in the Fake News Challenge evaluation metric and a 22.7% improvement in mean F1 compared to the highest scoring baseline. We also present qualitative results that show how our method outperforms state-of-the art algorithms on this challenge.",
"title": ""
},
{
"docid": "neg:1840494_13",
"text": "Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.",
"title": ""
},
{
"docid": "neg:1840494_14",
"text": "Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing systems is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues for this bike rebalancing problem: the determination of station inventory target level and the large scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis on the station network constructed by station-station connections and the trip duration, we propose an inter station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (the demands of these stations are very large and make the optimization infeasible) and group the rest stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large scale multiple vehicle routing problem is reduced to inner cluster one vehicle routing problem with guaranteed feasible solutions. Finally, the extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.",
"title": ""
},
{
"docid": "neg:1840494_15",
"text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.",
"title": ""
},
{
"docid": "neg:1840494_16",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources avalible and limited privileges granted to the user, but also presents unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
},
{
"docid": "neg:1840494_17",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no-load, we present theoretical development of effects of diode failures on machine output voltage. Thereby, we expect the spectral response faced with each fault condition, and we propose an original algorithm for state monitoring of rotating diodes. Moreover, given experimental observations of the spectral behavior of stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for detection of fault diodes, even when the generator has been fully loaded. However, their ability to distinguish between cases of diodes interrupted and short-circuited, has been limited to the no-load condition, and certain loads of specific natures.",
"title": ""
},
{
"docid": "neg:1840494_18",
"text": "To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. Efforts to automatically gather such information from unstructured text, however, is impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovation solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, which is way beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.",
"title": ""
},
{
"docid": "neg:1840494_19",
"text": "Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images. Current state-of-the-art methods process a batch of LR frames to generate a single high-resolution (HR) frame and run this scheme in a sliding window fashion over the entire video, effectively treating the problem as a large number of separate multi-frame super-resolution tasks. This approach has two main weaknesses: 1) Each input frame is processed and warped multiple times, increasing the computational cost, and 2) each output frame is estimated independently conditioned on the input frames, limiting the system's ability to produce temporally consistent results. In this work, we propose an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame. This naturally encourages temporally consistent results and reduces the computational cost by warping only one image in each step. Furthermore, due to its recurrent nature, the proposed method has the ability to assimilate a large number of previous frames without increased computational demands. Extensive evaluations and comparisons with previous methods validate the strengths of our approach and demonstrate that the proposed framework is able to significantly outperform the current state of the art.",
"title": ""
}
] |
1840495 | STEM education K-12: perspectives on integration | [
{
"docid": "pos:1840495_0",
"text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.",
"title": ""
}
] | [
{
"docid": "neg:1840495_0",
"text": "Autonomous robot manipulation often involves both estimating the pose of the object to be manipulated and selecting a viable grasp point. Methods using RGB-D data have shown great success in solving these problems. However, there are situations where cost constraints or the working environment may limit the use of RGB-D sensors. When limited to monocular camera data only, both the problem of object pose estimation and of grasp point selection are very challenging. In the past, research has focused on solving these problems separately. In this work, we introduce a novel method called SilhoNet that bridges the gap between these two tasks. We use a Convolutional Neural Network (CNN) pipeline that takes in region of interest (ROI) proposals to simultaneously predict an intermediate silhouette representation for objects with an associated occlusion mask. The 3D pose is then regressed from the predicted silhouettes. Grasp points from a precomputed database are filtered by back-projecting them onto the occlusion mask to find which points are visible in the scene. We show that our method achieves better overall performance than the state-of-the art PoseCNN network for 3D pose estimation on the YCB-video dataset.",
"title": ""
},
{
"docid": "neg:1840495_1",
"text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840495_2",
"text": "Symmetric ankle propulsion is the cornerstone of efficient human walking. The ankle plantar flexors provide the majority of the mechanical work for the step-to-step transition and much of this work is delivered via elastic recoil from the Achilles' tendon — making it highly efficient. Even though the plantar flexors play a central role in propulsion, body-weight support and swing initiation during walking, very few assistive devices have focused on aiding ankle plantarflexion. Our goal was to develop a portable ankle exoskeleton taking inspiration from the passive elastic mechanisms at play in the human triceps surae-Achilles' tendon complex during walking. The challenge was to use parallel springs to provide ankle joint mechanical assistance during stance phase but allow free ankle rotation during swing phase. To do this we developed a novel ‘smart-clutch’ that can engage and disengage a parallel spring based only on ankle kinematic state. The system is purely passive — containing no motors, electronics or external power supply. This ‘energy-neutral’ ankle exoskeleton could be used to restore symmetry and reduce metabolic energy expenditure of walking in populations with weak ankle plantar flexors (e.g. stroke, spinal cord injury, normal aging).",
"title": ""
},
{
"docid": "neg:1840495_3",
"text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, and etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key and if such a key is lost then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is secret sharing scheme (SSS) that lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them varying from trivial schemes to threshold based ones. Explanations on these schemes constructions are presented. The paper will also look at some applications of SSS.",
"title": ""
},
{
"docid": "neg:1840495_4",
"text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.",
"title": ""
},
{
"docid": "neg:1840495_5",
"text": "Mass spectrometry-based proteomics has greatly benefitted from enormous advances in high resolution instrumentation in recent years. In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. Complementing this hybrid trap-trap instrument, as well as the standalone Orbitrap analyzer termed Exactive, we here present coupling of a quadrupole mass filter to an Orbitrap analyzer. This \"Q Exactive\" instrument features high ion currents because of an S-lens, and fast high-energy collision-induced dissociation peptide fragmentation because of parallel filling and detection modes. The image current from the detector is processed by an \"enhanced Fourier Transformation\" algorithm, doubling mass spectrometric resolution. Together with almost instantaneous isolation and fragmentation, the instrument achieves overall cycle times of 1 s for a top 10 higher energy collisional dissociation method. More than 2500 proteins can be identified in standard 90-min gradients of tryptic digests of mammalian cell lysate- a significant improvement over previous Orbitrap mass spectrometers. Furthermore, the quadrupole Orbitrap analyzer combination enables multiplexed operation at the MS and tandem MS levels. This is demonstrated in a multiplexed single ion monitoring mode, in which the quadrupole rapidly switches among different narrow mass ranges that are analyzed in a single composite MS spectrum. Similarly, the quadrupole allows fragmentation of different precursor masses in rapid succession, followed by joint analysis of the higher energy collisional dissociation fragment ions in the Orbitrap analyzer. High performance in a robust benchtop format together with the ability to perform complex multiplexed scan modes make the Q Exactive an exciting new instrument for the proteomics and general analytical communities.",
"title": ""
},
{
"docid": "neg:1840495_6",
"text": "Context: Topic modeling finds human-readable structures in unstructured textual data. A widely used topic modeler is Latent Dirichlet allocation. When run on different datasets, LDA suffers from “order effects” i.e. different topics are generated if the order of training data is shuffled. Such order effects introduce a systematic error for any study. This error can relate to misleading results; specifically, inaccurate topic descriptions and a reduction in the efficacy of text mining classification results. Objective: To provide a method in which distributions generated by LDA are more stable and can be used for further analysis. Method: We use LDADE, a search-based software engineering tool that tunes LDA’s parameters using DE (Differential Evolution). LDADE is evaluated on data from a programmer information exchange site (Stackoverflow), title and abstract text of thousands of Software Engineering (SE) papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark); across different platforms (Linux, Macintosh) and for different kinds of LDAs (VEM, or using Gibbs sampling). Results were scored via topic stability and text mining classification accuracy. Results: In all treatments: (i) standard LDA exhibits very large topic instability; (ii) LDADE’s tunings dramatically reduce cluster instability; (iii) LDADE also leads to improved performances for supervised as well as unsupervised learning. Conclusion: Due to topic instability, using standard LDA with its “off-the-shelf” settings should now be depreciated. Also, in future, we should require SE papers that use LDA to test and (if needed) mitigate LDA topic instability. Finally, LDADE is a candidate technology for effectively and efficiently reducing that instability.",
"title": ""
},
{
"docid": "neg:1840495_7",
"text": "We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model’s posterior distribution for dense graphs. In specific numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of first applying a single threshold to all weights and then applying the classic stochastic block model, which can obscure latent block structure in networks. This model will enable the recovery of latent structure in a broader range of network data than was previously possible.",
"title": ""
},
{
"docid": "neg:1840495_8",
"text": "We describe and compare three predominant email sender authentication mechanisms based on DNS: SPF, DKIM and Sender-ID Framework (SIDF). These mechanisms are designed mainly to assist in filtering of undesirable email messages, in particular spam and phishing emails. We clarify the limitations of these mechanisms, identify risks, and make recommendations. In particular, we argue that, properly used, SPF and DKIM can both help improve the efficiency and accuracy of email filtering.",
"title": ""
},
{
"docid": "neg:1840495_9",
"text": "Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily live. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users' emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.",
"title": ""
},
{
"docid": "neg:1840495_10",
"text": "Extracellular vesicles (EVs) are membrane-enclosed vesicles that are released into the extracellular environment by various cell types, which can be classified as apoptotic bodies, microvesicles and exosomes. EVs have been shown to carry DNA, small RNAs, proteins and membrane lipids which are derived from the parental cells. Recently, several studies have demonstrated that EVs can regulate many biological processes, such as cancer progression, the immune response, cell proliferation, cell migration and blood vessel tube formation. This regulation is achieved through the release and transport of EVs and the transfer of their parental cell-derived molecular cargo to recipient cells. This thereby influences various physiological and sometimes pathological functions within the target cells. While intensive investigation of EVs has focused on pathological processes, the involvement of EVs in normal wound healing is less clear; however, recent preliminarily investigations have produced some initial insights. This review will provide an overview of EVs and discuss the current literature regarding the role of EVs in wound healing, especially, their influence on coagulation, cell proliferation, migration, angiogenesis, collagen production and extracellular matrix remodelling.",
"title": ""
},
{
"docid": "neg:1840495_11",
"text": "STUDY DESIGN\nThis study used a prospective, single-group repeated-measures design to analyze differences between the electromyographic (EMG) amplitudes produced by exercises for the trapezius and serratus anterior muscles.\n\n\nOBJECTIVE\nTo identify high-intensity exercises that elicit the greatest level of EMG activity in the trapezius and serratus anterior muscles.\n\n\nBACKGROUND\nThe trapezius and serratus anterior muscles are considered to be the only upward rotators of the scapula and are important for normal shoulder function. Electromyographic studies have been performed for these muscles during active and low-intensity exercises, but they have not been analyzed during high intensity exercises.\n\n\nMETHODS AND MEASURES\nSurface electrodes recorded EMG activity of the upper, middle, and lower trapezius and serratus anterior muscles during 10 exercises in 30 healthy subjects.\n\n\nRESULTS\nThe unilateral shoulder shrug exercise was found to produce the greatest EMG activity in the upper trapezius. For the middle trapezius, the greatest EMG amplitudes were generated with 2 exercises: shoulder horizontal extension with external rotation and the overhead arm raise in line with the lower trapezius muscle in the prone position. The arm raise overhead exercise in the prone position produced the maximum EMG activity in the lower trapezius. The serratus anterior was activated maximally with exercises requiring a great amount of upward rotation of the scapula. The exercises were shoulder abduction in the plane of the scapula above 120 degrees and a diagonal exercise with a combination of shoulder flexion, horizontal flexion, and external rotation.\n\n\nCONCLUSION\nThis study identified exercises that maximally activate the trapezius and serratus anterior muscles. This information may be helpful for clinicians in developing exercise programs for these muscles.",
"title": ""
},
{
"docid": "neg:1840495_12",
"text": "The parallel data accesses inherent to large-scale data-intensive scientific computing require that data servers handle very high I/O concurrency. Concurrent requests from different processes or programs to hard disk can cause disk head thrashing between different disk regions, resulting in unacceptably low I/O performance. Current storage systems either rely on the disk scheduler at each data server, or use SSD as storage, to minimize this negative performance effect. However, the ability of the scheduler to alleviate this problem by scheduling requests in memory is limited by concerns such as long disk access times, and potential loss of dirty data with system failure. Meanwhile, SSD is too expensive to be widely used as the major storage device in the HPC environment. We propose iTransformer, a scheme that employs a small SSD to schedule requests for the data on disk. Being less space constrained than with more expensive DRAM, iTransformer can buffer larger amounts of dirty data before writing it back to the disk, or prefetch a larger volume of data in a batch into the SSD. In both cases high disk efficiency can be maintained even for concurrent requests. Furthermore, the scheme allows the scheduling of requests in the background to hide the cost of random disk access behind serving process requests. Finally, as a non-volatile memory, concerns about the quantity of dirty data are obviated. We have implemented iTransformer in the Linux kernel and tested it on a large cluster running PVFS2. Our experiments show that iTransformer can improve the I/O throughput of the cluster by 35% on average for MPI/IO benchmarks of various data access patterns.",
"title": ""
},
{
"docid": "neg:1840495_13",
"text": "Centrality is an important concept in the study of social network analysis (SNA), which is used to measure the importance of a node in a network. While many different centrality measures exist, most of them are proposed and applied to static networks. However, most types of networks are dynamic that their topology changes over time. A popular approach to represent such networks is to construct a sequence of time windows with a single aggregated static graph that aggregates all edges observed over some time period. In this paper, an approach which overcomes the limitation of this representation is proposed based on the notion of the time-ordered graph, to measure the communication centrality of a node in dynamic networks.",
"title": ""
},
{
"docid": "neg:1840495_14",
"text": "Network Functions Virtualization (NFV) has enabled operators to dynamically place and allocate resources for network services to match workload requirements. However, unbounded end-to-end (e2e) latency of Service Function Chains (SFCs) resulting from distributed Virtualized Network Function (VNF) deployments can severely degrade performance. In particular, SFC instantiations with inter-data center links can incur high e2e latencies and Service Level Agreement (SLA) violations. These latencies can trigger timeouts and protocol errors with latency-sensitive operations.\n Traditional solutions to reduce e2e latency involve physical deployment of service elements in close proximity. These solutions are, however, no longer viable in the NFV era. In this paper, we present our solution that bounds the e2e latency in SFCs and inter-VNF control message exchanges by creating micro-service aggregates based on the affinity between VNFs. Our system, Contain-ed, dynamically creates and manages affinity aggregates using light-weight virtualization technologies like containers, allowing them to be placed in close proximity and hence bounding the e2e latency. We have applied Contain-ed to the Clearwater IP Multimedia System and built a proof-of-concept. Our results demonstrate that, by utilizing application and protocol specific knowledge, affinity aggregates can effectively bound SFC delays and significantly reduce protocol errors and service disruptions.",
"title": ""
},
{
"docid": "neg:1840495_15",
"text": "We present drawing on air, a haptic-aided input technique for drawing controlled 3D curves through space. Drawing on air addresses a control problem with current 3D modeling approaches based on sweeping movement of the hands through the air. Although artists praise the immediacy and intuitiveness of these systems, a lack of control makes it nearly impossible to create 3D forms beyond quick design sketches or gesture drawings. Drawing on air introduces two new strategies for more controlled 3D drawing: one-handed drag drawing and two-handed tape drawing. Both approaches have advantages for drawing certain types of curves. We describe a tangent preserving method for transitioning between the two techniques while drawing. Haptic-aided redrawing and line weight adjustment while drawing are also supported in both approaches. In a quantitative user study evaluation by illustrators, the one and two-handed techniques performed at roughly the same level and both significantly outperformed freehand drawing and freehand drawing augmented with a haptic friction effect. We present the design and results of this experiment, as well as user feedback from artists and 3D models created in a style of line illustration for challenging artistic and scientific subjects.",
"title": ""
},
{
"docid": "neg:1840495_16",
"text": "Friction stir welding (FSW) is a relatively new joining process that has been used for high production since 1996. Because melting does not occur and joining takes place below the melting temperature of the material, a high-quality weld is created. In this paper working principle and various factor affecting friction stir welding is discussed.",
"title": ""
},
{
"docid": "neg:1840495_17",
"text": "This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.",
"title": ""
},
{
"docid": "neg:1840495_18",
"text": "On a gambling task that models real-life decisions, patients with bilateral lesions of the ventromedial prefrontal cortex (VM) opt for choices that yield high immediate gains in spite of higher future losses. In this study, we addressed three possibilities that may account for this behaviour: (i) hypersensitivity to reward; (ii) insensitivity to punishment; and (iii) insensitivity to future consequences, such that behaviour is always guided by immediate prospects. For this purpose, we designed a variant of the original gambling task in which the advantageous decks yielded high immediate punishment but even higher future reward. The disadvantageous decks yielded low immediate punishment but even lower future reward. We measured the skin conductance responses (SCRs) of subjects after they had received a reward or punishment. Patients with VM lesions opted for the disadvantageous decks in both the original and variant versions of the gambling task. The SCRs of VM lesion patients after they had received a reward or punishment were not significantly different from those of controls. In a second experiment, we investigated whether increasing the delayed punishment in the disadvantageous decks of the original task or decreasing the delayed reward in the disadvantageous decks of the variant task would shift the behaviour of VM lesion patients towards an advantageous strategy. Both manipulations failed to shift the behaviour of VM lesion patients away from the disadvantageous decks. These results suggest that patients with VM lesions are insensitive to future consequences, positive or negative, and are primarily guided by immediate prospects. This 'myopia for the future' in VM lesion patients persists in the face of severe adverse consequences, i.e. rising future punishment or declining future reward.",
"title": ""
},
{
"docid": "neg:1840495_19",
"text": "A highly-efficient monopulse antenna system is proposed for radar tracking system application. In this study, a novel integrated front-end and back-end complicated three-dimensional (3-D) system is realized practically to achieve high-level of self-compactness. A wideband and compact monopulse comparator network is developed and integrated as the back-end circuit in the system. Performance of the complete monopulse system is verified together with the front-end antenna array. To ensure the system's electrical efficiency and mechanical strength, a 3-D metal-direct-printing technique is utilized to fabricate the complicated structure, avoiding drawbacks from conventional machining methods and assembly processes. Experimental results show the monopulse system can achieve a bandwidth of 12.9% with VSWR less than 1.5 in the Ku-band, and isolation is better than 30 dB. More than 31.5 dBi gain can be maintained in the sum-patterns of wide bandwidth. The amplitude imbalance is less than 0.2 dB and null-depths are lower than -30 dB in the difference-patterns. In particular, with the help of the metal-printing technique, more than 90% efficiency can be retained in the monopulse system. It is a great improvement compared with that obtained from traditional machining approaches, indicating that this technique is promising for realizing high-performance RF intricate systems electrically and mechanically.",
"title": ""
}
] |
1840496 | On Organizational Becoming: Rethinking Organizational Change | [
{
"docid": "pos:1840496_0",
"text": "Building on a formal theory of the structural aspects of organizational change initiated in Hannan, Pólos, and Carroll (2002a, 2002b), this paper focuses on structural inertia. We define inertia as a persistent organizational resistance to changing architectural features. We examine the evolutionary consequences of architectural inertia. The main theorem holds that selection favors architectural inertia in the sense that the median level of inertia in cohort of organizations presumably increases over time. A second theorem holds that the selection intensity favoring architectural inertia is greater when foresight about the consequences of changes is more limited. According to the prior theory of Hannan, Pólos, and Carroll (2002a, 2002b), foresight is limited by complexity and opacity. Thus it follows that the selection intensity favoring architectural inertia is stronger in populations composed of complex and opaque organizations than in those composed of simple and transparent ones. ∗This research was supported by fellowships from the Netherlands Institute for Advanced Study and by the Stanford Graduate School of Business Trust, ERIM at Erasmus University, and the Centre for Formal Studies in the Social Sciences at Lorand Eötvös University. We benefited from the comments of Jim Baron, Dave Barron, Gábor Péli, Joel Podolny, and the participants in the workshop of the Nagymaros Group on Organizational Ecology and in the Stanford Strategy Conference. †Stanford University ‡Loránd Eötvös University, Budapest and Erasmus University, Rotterdam §Stanford University",
"title": ""
},
{
"docid": "pos:1840496_1",
"text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.",
"title": ""
}
] | [
{
"docid": "neg:1840496_0",
"text": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "neg:1840496_1",
"text": "We propose a neural network for 3D point cloud processing that exploits spherical convolution kernels and octree partitioning of space. The proposed metric-based spherical kernels systematically quantize point neighborhoods to identify local geometric structures in data, while maintaining the properties of translation-invariance and asymmetry. The network architecture itself is guided by octree data structuring that takes full advantage of the sparse nature of irregular point clouds. We specify spherical kernels with the help of neurons in each layer that in turn are associated with spatial locations. We exploit this association to avert dynamic kernel generation during network training, that enables efficient learning with high resolution point clouds. We demonstrate the utility of the spherical convolutional neural network for 3D object classification on standard benchmark datasets.",
"title": ""
},
{
"docid": "neg:1840496_2",
"text": "The Piver classification of radical hysterectomy for the treatment of cervical cancer is outdated and misused. The Surgery Committee of the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer (EORTC) produced, approved, and adopted a revised classification. It is hoped that at least within the EORTC participating centers, a standardization of procedures is achieved. The clinical indications of the new classification are discussed.",
"title": ""
},
{
"docid": "neg:1840496_3",
"text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.",
"title": ""
},
{
"docid": "neg:1840496_4",
"text": "Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.",
"title": ""
},
{
"docid": "neg:1840496_5",
"text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.",
"title": ""
},
{
"docid": "neg:1840496_6",
"text": "Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.",
"title": ""
},
{
"docid": "neg:1840496_7",
"text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.",
"title": ""
},
{
"docid": "neg:1840496_8",
"text": "Robust inspection is important to ensure the safety of nuclear power plant components. An automated approach would require detecting often low contrast cracks that could be surrounded by or even within textures with similar appearances such as welding, scratches and grind marks. We propose a crack detection method for nuclear power plant inspection videos by fine tuning a deep neural network for detecting local patches containing cracks which are then grouped in spatial-temporal space for group-level classification. We evaluate the proposed method on a data set consisting of 17 videos consisting of nearly 150,000 frames of inspection video and provide comparison to prior methods.",
"title": ""
},
{
"docid": "neg:1840496_9",
"text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.",
"title": ""
},
{
"docid": "neg:1840496_10",
"text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.",
"title": ""
},
{
"docid": "neg:1840496_11",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. To achieve the goal of the study, a contextualized and literature based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of total (450) Questionnaires were distributed to customers, and managers of the surveyed airlines who could be reached by the researcher. 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and ttest analysis. The findings clearly show that there is significant evidence that blockchain technology enhance customer loyalty programs of airline business. It was discovered that Usage of blockchain technology is emphasized by the surveyed airlines operators in Nigeria., the extent of effective usage of customer loyalty programs is related to blockchain technology, and that he level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will assist to expand knowledge as to the usefulness of blockchain technology being a customer loyalty solution.",
"title": ""
},
{
"docid": "neg:1840496_12",
"text": "We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 “big” Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications. Our code is available together with all pretrained models at: https://github. com/datquocnguyen/jPTDP.",
"title": ""
},
{
"docid": "neg:1840496_13",
"text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.",
"title": ""
},
{
"docid": "neg:1840496_14",
"text": "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.",
"title": ""
},
{
"docid": "neg:1840496_15",
"text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.",
"title": ""
},
{
"docid": "neg:1840496_16",
"text": "Purpose – The aim of this study was two-fold: first, to examine the noxious effects of presenteeism on employees’ work well-being in a cross-cultural context involving Chinese and British employees; second, to explore the role of supervisory support as a pan-cultural stress buffer in the presenteeism process. Design/methodology/approach – Using structured questionnaires, the authors compared data collected from samples of 245 Chinese and 128 British employees working in various organizations and industries. Findings – Cross-cultural comparison revealed that the act of presenteeism was more prevalent among Chinese and they reported higher levels of strains than their British counterparts. Hierarchical regression analyses showed that presenteeism had noxious effects on exhaustion for both Chinese and British employees. Moreover, supervisory support buffered the negative impact of presenteeism on exhaustion for both Chinese and British employees. Specifically, the negative relation between presenteeism and exhaustion was stronger for those with more supervisory support. Practical implications – Presenteeism may be used as a career-protecting or career-promoting tactic. However, the negative effects of this behavior on employees’ work well-being across the culture divide should alert us to re-think its pros and cons as a career behavior. Employees in certain cultures (e.g. the hardworking Chinese) may exhibit more presenteeism behaviour, thus are in greater risk of ill-health. Originality/value – This is the first cross-cultural study demonstrating the universality of the act of presenteeism and its damaging effects on employees’ well-being. The authors’ findings of the buffering role of supervisory support across cultural contexts highlight the necessity to incorporate resources in mitigating the harmful impact of presenteeism.",
"title": ""
},
{
"docid": "neg:1840496_17",
"text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.",
"title": ""
},
{
"docid": "neg:1840496_18",
"text": "The abstract paragraph should be indented 1/2 inch (3 picas) on both left and righthand margins. Use 10 point type, with a vertical spacing of 11 points. The word Abstract must be centered, bold, and in point size 12. Two line spaces precede the abstract. The abstract must be limited to one paragraph.",
"title": ""
}
] |
1840497 | The influence of social media interactions on consumer–brand relationships: A three-country study of brand perceptions and marketing behaviors | [
{
"docid": "pos:1840497_0",
"text": "Traditionally, consumers used the Internet to simply expend content: they read it, they watched it, and they used it to buy products and services. Increasingly, however, consumers are utilizing platforms–—such as content sharing sites, blogs, social networking, and wikis–—to create, modify, share, and discuss Internet content. This represents the social media phenomenon, which can now significantly impact a firm’s reputation, sales, and even survival. Yet, many executives eschew or ignore this form of media because they don’t understand what it is, the various forms it can take, and how to engage with it and learn. In response, we present a framework that defines social media by using seven functional building blocks: identity, conversations, sharing, presence, relationships, reputation, and groups. As different social media activities are defined by the extent to which they focus on some or all of these blocks, we explain the implications that each block can have for how firms should engage with social media. To conclude, we present a number of recommendations regarding how firms should develop strategies for monitoring, understanding, and responding to different social media activities. final version published in Business Horizons (2011) v. 54 pp. 241-251. doi: 10.106/j.bushor.2011.01.005 1. Welcome to the jungle: The social media ecology Social media employ mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, co-",
"title": ""
},
{
"docid": "pos:1840497_1",
"text": "The present research proposes schema congruity as a theoretical basis for examining the effectiveness and consequences of product anthropomorphism. Results of two studies suggest that the ability of consumers to anthropomorphize a product and their consequent evaluation of that product depend on the extent to which that product is endowed with characteristics congruent with the proposed human schema. Furthermore, consumers’ perception of the product as human mediates the influence of feature type on product evaluation. Results of a third study, however, show that the affective tag attached to the specific human schema moderates the evaluation but not the successful anthropomorphizing of theproduct.",
"title": ""
},
{
"docid": "pos:1840497_2",
"text": "This study emphasizes the need for standardized measurement tools for human robot interaction (HRI). If we are to make progress in this field then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary. C. Bartneck ( ) Department of Industrial Design, Eindhoven University of Technology, Den Dolech 2, 5600 Eindhoven, The Netherlands e-mail: c.bartneck@tue.nl D. Kulić Nakamura & Yamane Lab, Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan e-mail: dana@ynl.t.u-tokyo.ac.jp E. Croft · S. Zoghbi Department of Mechanical Engineering, University of British Columbia, 6250 Applied Science Lane, Room 2054, Vancouver, V6T 1Z4, Canada E. Croft e-mail: ecroft@mech.ubc.ca S. Zoghbi e-mail: szoghbi@mech.ubc.ca",
"title": ""
},
{
"docid": "pos:1840497_3",
"text": "Why are certain pieces of online content more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique dataset of all the New York Times articles published over a three month period, the authors examine how emotion shapes virality. Results indicate that positive content is more viral than negative content, but that the relationship between emotion and social transmission is more complex than valence alone. Virality is driven, in part, by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low arousal, or deactivating emotions (e.g., sadness) is less viral. These results hold even controlling for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental results further demonstrate the causal impact of specific emotion on transmission, and illustrate that it is driven by the level of activation induced. Taken together, these findings shed light on why people share content and provide insight into designing effective viral marketing",
"title": ""
},
{
"docid": "pos:1840497_4",
"text": "There is an ongoing debate over the activities of brands and companies in social media. Some researchers believe social media provide a unique opportunity for brands to foster their relationships with customers, while others believe the contrary. Taking the perspective of the brand community building plus the brand trust and loyalty literatures, our goal is to show how brand communities based on social media influence elements of the customer centric model (i.e., the relationships among focal customer and brand, product, company, and other customers) and brand loyalty. A survey-based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on customer/product, customer/brand, customer/company and customer/other customers relationships, which in turn have positive effects on brand trust, and trust has positive effects on brand loyalty. We find that brand trust has a fully mediating role in converting the effects of enhanced relationships in brand community to brand loyalty. The implications for marketing practice and future research are discussed. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "neg:1840497_0",
"text": "In this paper, we present the concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is in order to reduce the cost, transmission and distribution losses. This concept is known as Microwave Power transmission (MPT). We also discussed the technological developments in Wireless Power Transmission (WPT) which are required for the improment .The components which are requiredfor the development of Microwave Power transmission(MPT)are also mentioned along with the performance when they are connected to various devices at different frequency levels . The advantages, disadvantages, biological impacts and applications of WPT are also presented.",
"title": ""
},
{
"docid": "neg:1840497_1",
"text": "Various issues make framework development harder than regular development. Building product lines and frameworks requires increased coordination and communication between stakeholders and across the organization.\n The difficulty of building the right abstractions ranges from understanding the domain models, selecting and evaluating the framework architecture, to designing the right interfaces, and adds to the complexity of a framework project.",
"title": ""
},
{
"docid": "neg:1840497_2",
"text": "A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity and CO2. In developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an excel sheet in a personal computer (PC) through a Graphical User Interface (GUI), made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, for monitoring data remotely. The results and the performance of the proposed system is discussed.",
"title": ""
},
{
"docid": "neg:1840497_3",
"text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.",
"title": ""
},
{
"docid": "neg:1840497_4",
"text": "B loyalty and the more modern topics of computing customer lifetime value and structuring loyalty programs remain the focal point for a remarkable number of research articles. At first, this research appears consistent with firm practices. However, close scrutiny reveals disaffirming evidence. Many current so-called loyalty programs appear unrelated to the cultivation of customer brand loyalty and the creation of customer assets. True investments are up-front expenditures that produce much greater future returns. In contrast, many socalled loyalty programs are shams because they produce liabilities (e.g., promises of future rewards or deferred rebates) rather than assets. These programs produce short-term revenue from customers while producing substantial future obligations to those customers. Rather than showing trust by committing to the customer, the firm asks the customer to trust the firm—that is, trust that future rewards are indeed forthcoming. The entire idea is antithetical to the concept of a customer asset. Many modern loyalty programs resemble old-fashioned trading stamps or deferred rebates that promise future benefits for current patronage. A true loyalty program invests in the customer (e.g., provides free up-front training, allows familiarization or customization) with the expectation of greater future revenue. Alternative motives for extant programs are discussed.",
"title": ""
},
{
"docid": "neg:1840497_5",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "neg:1840497_6",
"text": "In conventional power system operation, droop control methods are used to facilitate load sharing among different generation sources. This method compensates for both active and reactive power imbalances by adjusting the output voltage magnitude and frequency of the generating unit. Both P-ω and Q-V droops have been used in synchronous machines for decades. Similar droop controllers were used in this study to develop a control algorithm for a three-phase isolated (islanded) inverter. Controllers modeled in a synchronous dq reference frame were simulated in PLECS and validated with the hardware setup. A small-signal model based on an averaged model of the inverter was developed to study the system's dynamics. The accuracy of this mathematical model was then verified using the data obtained from the experimental and simulation results. This validated model is a useful tool for the further dynamic analysis of a microgrid.",
"title": ""
},
{
"docid": "neg:1840497_7",
"text": "Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations for machine learning is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of maps with a fixed size. Then we evaluate the network’s generalizability to three set of larger maps. by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.",
"title": ""
},
{
"docid": "neg:1840497_8",
"text": "Group Support Systems (GSS) can improve the productivity of Group Work by offering a variety of tools to assist a virtual group across geographical distances. Experience shows that the value of a GSS depends on how purposefully and skillfully it is used. We present a framework for a universal GSS based on a thinkLet- and thinXel-based Group Process Modeling Language (GPML). Our framework approach uses the GPML to describe different kinds of group processes in an unambiguous and compact representation and to guide the participants automatically through these processes. We assume that a GSS based on this GPML can provide the following advantages: to support the user by designing and executing a collaboration process and to increase the applicability of GSSs for different kinds of group processes. We will present a prototype and use different kinds of group processes to illustrate the application of a GPML for a universal GSS.",
"title": ""
},
{
"docid": "neg:1840497_9",
"text": "In this paper performance of LQR and ANFIS control for a Double Inverted Pendulum system is compared. The double inverted pendulum system is highly unstable and nonlinear. Mathematical model is presented by linearizing the system about its vertical position. The analysis of the system is performed for its stability, controllability and observability. Furthermore, the LQR controller and ANFIS controller based on the state variable fusion is proposed for the control of the double inverted pendulum system and simulation results show that ANFIS controller has better tracking performance and disturbance rejecting performance as compared to LQR controller.",
"title": ""
},
{
"docid": "neg:1840497_10",
"text": "Visualizing the result of users' opinion mining on twitter using social network graph can play a crucial role in decision-making. Available data visualizing tools, such as NodeXL, use a specific file format as an input to construct and visualize the social network graph. One of the main components of the input file is the sentimental score of the users' opinion. This motivates us to develop a free and open source system that can take the opinion of users in raw text format and produce easy-to-interpret visualization of opinion mining and sentiment analysis result on a social network. We use a public machine learning library called LingPipe Library to classify the sentiments of users' opinion into positive, negative and neutral classes. Our proposed system can be used to analyze and visualize users' opinion on the network level to determine sub-social structures (sub-groups). Moreover, the proposed system can also identify influential people in the social network by using node level metrics such as betweenness centrality. In addition to the network level and node level analysis, our proposed method also provides an efficient filtering mechanism by either time and date, or the sentiment score. We tested our proposed system using user opinions about different Samsung products and related issues that are collected from five official twitter accounts of Samsung Company. The test results show that our proposed system will be helpful to analyze and visualize the opinion of users at both network level and node level.",
"title": ""
},
{
"docid": "neg:1840497_11",
"text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.",
"title": ""
},
{
"docid": "neg:1840497_12",
"text": "Strawberry and kiwi leathers were used to develop a new healthy and preservative-free fruit snack for new markets. Fruit puree was dehydrated at 60 °C for 20 h and subjected to accelerated storage. Soluble solids, titratable acidity, pH, water activity (aw ), total phenolic (TP), antioxidant activity (AOA) and capacity (ORAC), and color change (browning index) were measured in leathers, cooked, and fresh purees. An untrained panel was used to evaluate consumer acceptability. Soluble solids of fresh purees were 11.24 to 13.04 °Brix, whereas pH was 3.46 to 3.39. Leathers presented an aw of 0.59 to 0.67, and a moisture content of 21 kg water/100 kg. BI decreased in both leathers over accelerated storage period. TP and AOA were higher (P ≤ 0.05) in strawberry formulations. ORAC decreased 57% in strawberry and 65% in kiwi leathers when compared to fruit puree. TP and AOA increased in strawberries during storage. Strawberry and Kiwi leathers may be a feasible new, natural, high antioxidant, and healthy snack for the Chilean and other world markets, such as Europe, particularly the strawberry leather, which was preferred by untrained panelists.",
"title": ""
},
{
"docid": "neg:1840497_13",
"text": "The number R(4, 3, 3) is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on abstraction and symmetry breaking that applies to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value R(4, 3, 3) = 30. Along the way it is required to first compute the previously unknown set ℛ ( 3 , 3 , 3 ; 13 ) $\\mathcal {R}(3,3,3;13)$ consisting of 78,892 Ramsey colorings.",
"title": ""
},
{
"docid": "neg:1840497_14",
"text": "A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.",
"title": ""
},
{
"docid": "neg:1840497_15",
"text": "As a promising paradigm to reduce both capital and operating expenditures, the cloud radio access network (C-RAN) has been shown to provide high spectral efficiency and energy efficiency. Motivated by its significant theoretical performance gains and potential advantages, C-RANs have been advocated by both the industry and research community. This paper comprehensively surveys the recent advances of C-RANs, including system architectures, key techniques, and open issues. The system architectures with different functional splits and the corresponding characteristics are comprehensively summarized and discussed. The state-of-the-art key techniques in C-RANs are classified as: the fronthaul compression, large-scale collaborative processing, and channel estimation in the physical layer; and the radio resource allocation and optimization in the upper layer. Additionally, given the extensiveness of the research area, open issues, and challenges are presented to spur future investigations, in which the involvement of edge cache, big data mining, socialaware device-to-device, cognitive radio, software defined network, and physical layer security for C-RANs are discussed, and the progress of testbed development and trial test is introduced as well.",
"title": ""
},
{
"docid": "neg:1840497_16",
"text": "This paper derives the forward and inverse kinematics of a humanoid robot. The specific humanoid that the derivation is for is a robot with 27 degrees of freedom but the procedure can be easily applied to other similar humanoid platforms. First, the forward and inverse kinematics are derived for the arms and legs. Then, the kinematics for the torso and the head are solved. Finally, the forward and inverse kinematic solutions for the whole body are derived using the kinematics of arms, legs, torso, and head.",
"title": ""
},
{
"docid": "neg:1840497_17",
"text": "Objective\nPatient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that needs to be removed to de-identify patient notes. Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.\n\n\nMaterials and Methods\nWe introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.\n\n\nResults\nOur ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.\n\n\nConclusion\nOur findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering.",
"title": ""
},
{
"docid": "neg:1840497_18",
"text": "OBJECTIVES\nTo describe pelvic organ prolapse surgical success rates using a variety of definitions with differing requirements for anatomic, symptomatic, or re-treatment outcomes.\n\n\nMETHODS\nEighteen different surgical success definitions were evaluated in participants who underwent abdominal sacrocolpopexy within the Colpopexy and Urinary Reduction Efforts trial. The participants' assessments of overall improvement and rating of treatment success were compared between surgical success and failure for each of the definitions studied. The Wilcoxon rank sum test was used to identify significant differences in outcomes between success and failure.\n\n\nRESULTS\nTreatment success varied widely depending on definition used (19.2-97.2%). Approximately 71% of the participants considered their surgery \"very successful,\" and 85.2% considered themselves \"much better\" than before surgery. Definitions of success requiring all anatomic support to be proximal to the hymen had the lowest treatment success (19.2-57.6%). Approximately 94% achieved surgical success when it was defined as the absence of prolapse beyond the hymen. Subjective cure (absence of bulge symptoms) occurred in 92.1% while absence of re-treatment occurred in 97.2% of participants. Subjective cure was associated with significant improvements in the patient's assessment of both treatment success and overall improvement, more so than any other definition considered (P<.001 and <.001, respectively). Similarly, the greatest difference in symptom burden and health-related quality of life as measured by the Pelvic Organ Prolapse Distress Inventory and Pelvic Organ Prolapse Impact Questionnaire scores between treatment successes and failures was noted when success was defined as subjective cure (P<.001).\n\n\nCONCLUSION\nThe definition of success substantially affects treatment success rates after pelvic organ prolapse surgery. The absence of vaginal bulge symptoms postoperatively has a significant relationship with a patient's assessment of overall improvement, while anatomic success alone does not.\n\n\nLEVEL OF EVIDENCE\nII.",
"title": ""
},
{
"docid": "neg:1840497_19",
"text": "The screening of novel materials with good performance and the modelling of quantitative structureactivity relationships (QSARs), among other issues, are hot topics in the field of materials science. Traditional experiments and computational modelling often consume tremendous time and resources and are limited by their experimental conditions and theoretical foundations. Thus, it is imperative to develop a new method of accelerating the discovery and design process for novel materials. Recently, materials discovery and design using machine learning have been receiving increasing attention and have achieved great improvements in both time efficiency and prediction accuracy. In this review, we first outline the typical mode of and basic procedures for applying machine learning in materials science, and we classify and compare the main algorithms. Then, the current research status is reviewed with regard to applications of machine learning in material property prediction, in new materials discovery and for other purposes. Finally, we discuss problems related to machine learning in materials science, propose possible solutions, and forecast potential directions of future research. By directly combining computational studies with experiments, we hope to provide insight into the parameters that affect the properties of materials, thereby enabling more efficient and target-oriented research on materials dis-",
"title": ""
}
] |
1840498 | Common Elements Wideband MIMO Antenna System for WiFi/LTE Access-Point Applications | [
{
"docid": "pos:1840498_0",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "pos:1840498_1",
"text": "The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.",
"title": ""
}
] | [
{
"docid": "neg:1840498_0",
"text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured intima-media of carotid arteries and used measurements of arterial wall thickness as other features. Patients underwent carotid artery scanning using high-resolution ultrasound devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (gave about 85%-90% goodness of fit) outperforming the other classifiers.",
"title": ""
},
{
"docid": "neg:1840498_1",
"text": "In this paper, the bridgeless interleaved boost topology is proposed for plug-in hybrid electric vehicle and electric vehicle battery chargers to achieve high efficiency, which is critical to minimize the charger size, charging time and the amount and cost of electricity drawn from the utility. An analytical model for this topology is developed, enabling the calculation of power losses and efficiency. Experimental and simulation results of prototype units converting the universal AC input voltage to 400 V DC at 3.4 kW are given to verify the proof of concept, and analytical work reported in this paper.",
"title": ""
},
{
"docid": "neg:1840498_2",
"text": "We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometrybased and image-based modeling and rendering techniques, has two components. The rst component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is e ective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current imagebased modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's abilty to create realistic renderings of architectural scenes from viewpoints far from the original photographs.",
"title": ""
},
{
"docid": "neg:1840498_3",
"text": "Why You Can’t Find a Taxi in the Rain and Other Labor Supply Lessons from Cab Drivers In a seminal paper, Camerer, Babcock, Loewenstein, and Thaler (1997) find that the wage elasticity of daily hours of work New York City (NYC) taxi drivers is negative and conclude that their labor supply behavior is consistent with target earning (having reference dependent preferences). I replicate and extend the CBLT analysis using data from all trips taken in all taxi cabs in NYC for the five years from 2009-2013. Using the model of expectations-based reference points of Koszegi and Rabin (2006), I distinguish between anticipated and unanticipated daily wage variation and present evidence that only a small fraction of wage variation (about 1/8) is unanticipated so that reference dependence (which is relevant only in response to unanticipated variation) can, at best, play a limited role in determining labor supply. The overall pattern in my data is clear: drivers tend to respond positively to unanticipated as well as anticipated increases in earnings opportunities. This is consistent with the neoclassical optimizing model of labor supply and does not support the reference dependent preferences model. I explore heterogeneity across drivers in their labor supply elasticities and consider whether new drivers differ from more experienced drivers in their behavior. I find substantial heterogeneity across drivers in their elasticities, but the estimated elasticities are generally positive and only rarely substantially negative. I also find that new drivers with smaller elasticities are more likely to exit the industry while drivers who remain learn quickly to be better optimizers (have positive labor supply elasticities that grow with experience). JEL Classification: J22, D01, D03",
"title": ""
},
{
"docid": "neg:1840498_4",
"text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.",
"title": ""
},
{
"docid": "neg:1840498_5",
"text": "A mobile wireless sensor network owes its name to the presence of mobile sink or sensor nodes within the network. The advantages of mobile WSN over static WSN are better energy efficiency, improved coverage, enhanced target tracking and superior channel capacity. In this paper we present and discuss hierarchical multi-tiered architecture for mobile wireless sensor network. This architecture is proposed for the future pervasive computing age. We also elaborate on the impact of mobility on different performance metrics in mobile WSN. A study of some of the possible application scenarios for pervasive computing involving mobile WSN is also presented. These application scenarios will be discussed in their implementation context. While discussing the possible applications, we also study related technologies that appear promising to be integrated with mobile WSN in the ubiquitous computing. With an enormous growth in number of cellular subscribers, we therefore place the mobile phone as the key element in future ubiquitous wireless networks. With the powerful computing, communicating and storage capacities of these mobile devices, the network performance can benefit from the architecture in terms of scalability, energy efficiency and packet delay, etc.",
"title": ""
},
{
"docid": "neg:1840498_6",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
},
{
"docid": "neg:1840498_7",
"text": "Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications for text understanding. Existing systems typically run a named entity recognition (NER) model to extract entity names first, then run an entity linking model to link extracted names to a knowledge base. NER and linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely. In experiments on the CoNLL’03/AIDA data set, JERL outperforms state-of-art NER and linking systems, and we find improvements of 0.4% absolute F1 for NER on CoNLL’03, and 0.36% absolute precision@1 for linking on AIDA.",
"title": ""
},
{
"docid": "neg:1840498_8",
"text": "The sentiment mining is a fast growing topic of both academic research and commercial applications, especially with the widespread of short-text applications on the Web. A fundamental problem that confronts sentiment mining is the automatics and correctness of mined sentiment. This paper proposes an DLDA (Double Latent Dirichlet Allocation) model to analyze sentiment for short-texts based on topic model. Central to DLDA is to add sentiment to topic model and consider sentiment as equal to topic, but independent of topic. DLDA is actually two methods DLDA I and its improvement DLDA II. Compared to the single topic-word LDA, the double LDA I, i.e., DLDA I designs another sentiment-word LDA. Both LDAs are independent of each other, but they combine to influence the selected words in short-texts. DLDA II is an improvement of DLDA I. It employs entropy formula to assign weights of words in the Gibbs sampling based on the ideas that words with stronger sentiment orientation should be assigned with higher weights. Experiments show that compared with other traditional topic methods, both DLDA I and II can achieve higher accuracy with less manual needs.",
"title": ""
},
{
"docid": "neg:1840498_9",
"text": "Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.",
"title": ""
},
{
"docid": "neg:1840498_10",
"text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns. The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.",
"title": ""
},
{
"docid": "neg:1840498_11",
"text": "BACKGROUND\nTo date the manner in which information reaches the nucleus on that part within the three-dimensional structure where specific restorative processes of structural components of the cell are required is unknown. The soluble signalling molecules generated in the course of destructive and restorative processes communicate only as needed.\n\n\nHYPOTHESIS\nAll molecules show temperature-dependent molecular vibration creating a radiation in the infrared region. Each molecule species has in its turn a specific frequency pattern under given specific conditions. Changes in their structural composition result in modified frequency patterns of the molecules in question. The main structural elements of the cell membrane, of the endoplasmic reticulum, of the Golgi apparatus, and of the different microsomes representing the great variety of polar lipids show characteristic frequency patterns with peaks in the region characterised by low water absorption. These structural elements are very dynamic, mainly caused by the creation of signal molecules and transport containers. By means of the characteristic radiation, the area where repair or substitution services are needed could be identified; this spatial information complements the signalling of the soluble signal molecules. Based on their resonance properties receptors located on the outer leaflet of the nuclear envelope should be able to read typical frequencies and pass them into the nucleus. Clearly this physical signalling must be blocked by the cell membrane to obviate the flow of information into adjacent cells.\n\n\nCONCLUSION\nIf the hypothesis can be proved experimentally, it should be possible to identify and verify characteristic infrared frequency patterns. The application of these signal frequencies onto cells would open entirely new possibilities in medicine and all biological disciplines specifically to influence cell growth and metabolism. Similar to this intracellular system, an extracellular signalling system with many new therapeutic options has to be discussed.",
"title": ""
},
{
"docid": "neg:1840498_12",
"text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.",
"title": ""
},
{
"docid": "neg:1840498_13",
"text": "Flexoelectricity and the concomitant emergence of electromechanical size-effects at the nanoscale have been recently exploited to propose tantalizing concepts such as the creation of “apparently piezoelectric” materials without piezoelectric materials, e.g. graphene, emergence of “giant” piezoelectricity at the nanoscale, enhanced energy harvesting, among others. The aforementioned developments pertain primarily to hard ceramic crystals. In this work, we develop a nonlinear theoretical framework for flexoelectricity in soft materials. Using the concept of soft electret materials, we illustrate an interesting nonlinear interplay between the so-called Maxwell stress effect and flexoelectricity, and propose the design of a novel class of apparently piezoelectric materials whose constituents are intrinsically non-piezoelectric. In particular, we show that the electret-Maxwell stress based mechanism can be combined with flexoelectricity to achieve unprecedentedly high values of electromechanical coupling. Flexoelectricity is also important for a special class of soft materials: biological membranes. In this context, flexoelectricity manifests itself as the development of polarization upon changes in curvature. Flexoelectricity is found to be important in a number of biological functions including hearing, ion transport and in some situations where mechanotransduction is necessary. In this work, we present a simple linearized theory of flexoelectricity in biological membranes and some illustrative examples. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840498_14",
"text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.",
"title": ""
},
{
"docid": "neg:1840498_15",
"text": "When interacting and communicating with virtual agents in immersive environments, the agents’ behavior should be believable and authentic. Thereby, one important aspect is a convincing auralization of their speech. In this work-in-progress paper a study design to evaluate the effect of adding directivity to speech sound source on the perceived social presence of a virtual agent is presented. Therefore, we describe the study design and discuss first results of a prestudy as well as consequential improvements of the design.",
"title": ""
},
{
"docid": "neg:1840498_16",
"text": "Automation in agriculture comes into play to increase productivity, quality and economic growth of the country. Fruit grading is an important process for producers which affects the fruits quality evaluation and export market. Although the grading and sorting can be done by the human, but it is slow, labor intensive, error prone and tedious. Hence, there is a need of an intelligent fruit grading system. In recent years, researchers had developed numerous algorithms for fruit sorting using computer vision. Color, textural and morphological features are the most commonly used to identify the diseases, maturity and class of the fruits. Subsequently, these features are used to train soft computing technique network. In this paper, use of image processing in agriculture has been reviewed so as to provide an insight to the use of vision based systems highlighting their advantages and disadvantages.",
"title": ""
},
{
"docid": "neg:1840498_17",
"text": "This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. Computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. Results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. The results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published Tabu Search approach.",
"title": ""
},
{
"docid": "neg:1840498_18",
"text": "Ultrasound image quality is related to the receive beamformer’s ability. Delay and sum (DAS), a conventional beamformer, is combined with the coherence factor (CF) technique to suppress side lobe levels. The purpose of this study is to improve these beamformer’s abilities. It has been shown that extension of the receive aperture can improve the receive beamformer’s ability in radar studies. This paper shows that the focusing quality of CF and CF+DAS in medical ultrasound can be increased by extension of the receive aperture’s length in phased synthetic aperture (PSA) imaging. The 3-dB width of the main lobe in the receive beam related to CF focusing decreased to 0.55 mm using the proposed PSA compared to the conventional phased array (PHA) imaging, whose FWHM is about 0.9 mm. The clutter-to-total-energy ratio (CTR) represented by R20 dB showed an improvement of 50 and 33% for CF and CF+DAS beamformers, respectively, with PSA as compared to PHA. In addition, simulation results validated the effectiveness of PSA versus PHA. In applications where there are no important limitations on the SNR, PSA imaging is recommended as it increases the ability of the receive beamformer for better focusing.",
"title": ""
}
] |
1840499 | Ontologies in Ubiquitous Computing | [
{
"docid": "pos:1840499_0",
"text": "m U biquitous computing enhances computer use by making many computers available throughout the physical environment, while making them effectively invisible to the user. This article explains what is new and different about the computer science involved in ubiquitous computing. First, it provides a brief overview of ubiquitous computing, then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g., chips), network protocols, interaction substrates (e.g., software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science. Since we started this work at Xerox Palo Alto Research Center (PARC) in 1988 a few places have begun work on this possible next-generation computing environment in which each person is continually interacting with hundreds of nearby wirelessly interconnected computers. The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user. To bring computers to this point while retaining their power will require radically new kinds of computers of all sizes and shapes to be available to each person. I call this future world \"Ubiquitous Comput ing\" (Ubicomp) [27]. The research method for ubiquitous computing is standard experimental computer science: the construction of working prototypes of the necessai-y infrastructure in sufficient quantity to debug the viability of the systems in everyday use; ourselves and a few colleagues serving as guinea pigs. This is",
"title": ""
}
] | [
{
"docid": "neg:1840499_0",
"text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.",
"title": ""
},
{
"docid": "neg:1840499_1",
"text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.",
"title": ""
},
{
"docid": "neg:1840499_2",
"text": "This paper reviews the use of socially interactive robots to assist in the therapy of children with autism. The extent to which the robots were successful in helping the children in their social, emotional, and communication deficits was investigated. Child-robot interactions were scrutinized with respect to the different target behaviours that are to be elicited from a child during therapy. These behaviours were thoroughly examined with respect to a child's development needs. Most importantly, experimental data from the surveyed works were extracted and analyzed in terms of the target behaviours and how each robot was used during a therapy session to achieve these behaviours. The study concludes by categorizing the different therapeutic roles that these robots were observed to play, and highlights the important design features that enable them to achieve high levels of effectiveness in autism therapy.",
"title": ""
},
{
"docid": "neg:1840499_3",
"text": "In many randomised trials researchers measure a continuous variable at baseline and again as an outcome assessed at follow up. Baseline measurements are common in trials of chronic conditions where researchers want to see whether a treatment can reduce pre-existing levels of pain, anxiety, hypertension, and the like. Statistical comparisons in such trials can be made in several ways. Comparison of follow up (posttreatment) scores will give a result such as “at the end of the trial, mean pain scores were 15 mm (95% confidence interval 10 to 20 mm) lower in the treatment group.” Alternatively a change score can be calculated by subtracting the follow up score from the baseline score, leading to a statement such as “pain reductions were 20 mm (16 to 24 mm) greater on treatment than control.” If the average baseline scores are the same in each group the estimated treatment effect will be the same using these two simple approaches. If the treatment is effective the statistical significance of the treatment effect by the two methods will depend on the correlation between baseline and follow up scores. If the correlation is low using the change score will add variation and the follow up score is more likely to show a significant result. Conversely, if the correlation is high using only the follow up score will lose information and the change score is more likely to be significant. It is incorrect, however, to choose whichever analysis gives a more significant finding. The method of analysis should be specified in the trial protocol. Some use change scores to take account of chance imbalances at baseline between the treatment groups. However, analysing change does not control for baseline imbalance because of regression to the mean : baseline values are negatively correlated with change because patients with low scores at baseline generally improve more than those with high scores. A better approach is to use analysis of covariance (ANCOVA), which, despite its name, is a regression method. In effect two parallel straight lines (linear regression) are obtained relating outcome score to baseline score in each group. They can be summarised as a single regression equation: follow up score = constant + a◊baseline score + b◊group where a and b are estimated coefficients and group is a binary variable coded 1 for treatment and 0 for control. The coefficient b is the effect of interest—the estimated difference between the two treatment groups. In effect an analysis of covariance adjusts each patient’s follow up score for his or her baseline score, but has the advantage of being unaffected by baseline differences. If, by chance, baseline scores are worse in the treatment group, the treatment effect will be underestimated by a follow up score analysis and overestimated by looking at change scores (because of regression to the mean). By contrast, analysis of covariance gives the same answer whether or not there is baseline imbalance. As an illustration, Kleinhenz et al randomised 52 patients with shoulder pain to either true or sham acupuncture. Patients were assessed before and after treatment using a 100 point rating scale of pain and function, with lower scores indicating poorer outcome. There was an imbalance between groups at baseline, with better scores in the acupuncture group (see table). Analysis of post-treatment scores is therefore biased. 
The authors analysed change scores, but as baseline and change scores are negatively correlated (about r = − 0.25 within groups) this analysis underestimates the effect of acupuncture. From analysis of covariance we get: follow up score = 24 + 0.71 × baseline score + 12.7 × group (see figure). The coefficient for group (b) has a useful interpretation: it is the difference between the mean change scores of each group. In the above example it can be interpreted as “pain and function score improved by an estimated 12.7 points more on average in the treatment group than in the control group.” A 95% confidence interval and P value can also be calculated for b (see table). The regression equation provides a means of prediction: a patient with a baseline score of 50, for example, would be predicted to have a follow up score of 72.2 on treatment and 59.5 on control. An additional advantage of analysis of covariance is that it generally has greater statistical power to detect a treatment effect than the other methods. For example, a trial with a correlation between baseline and follow",
"title": ""
},
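To make the ANCOVA formulation in the passage above concrete, here is a minimal sketch (not part of the original passage) of fitting follow up score = constant + a × baseline score + b × group by ordinary least squares. The small arrays are made-up illustrative data, not the Kleinhenz et al. trial values.

```python
import numpy as np

# Hypothetical illustrative data (not the trial data from the passage).
baseline = np.array([62.0, 55.0, 48.0, 70.0, 51.0, 64.0, 58.0, 45.0])
follow_up = np.array([78.0, 70.0, 61.0, 83.0, 60.0, 72.0, 66.0, 55.0])
group = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = treatment, 0 = control

# Design matrix for: follow_up = constant + a*baseline + b*group
X = np.column_stack([np.ones_like(baseline), baseline, group])
coef, *_ = np.linalg.lstsq(X, follow_up, rcond=None)
constant, a, b = coef

print(f"constant={constant:.2f}, a={a:.2f}, b={b:.2f}")
# b estimates the baseline-adjusted difference between treatment and control,
# i.e., the ANCOVA treatment effect described in the passage.
```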
{
"docid": "neg:1840499_4",
"text": "-Finger print recognition is more popular attending system mostly used in many offices as it provides more accuracy. Machinery also system software based finger print recognition systems are mostly used. But its real time monitoring and remote intimation is not performed until now if wrong person is entering. Instant reporting to officer is necessary for maintaining absence/presence of staff members. This automatic reporting is necessary as officer may be remotely available. So, fingerprint identification based attendance system is proposed with real time remote monitoring. Proposed system requires Finger print sensor, data acquisition system for it, Processor (ARM 11), Ethernet/Wi-Fi Interface for Internet access and Smart phone for monitoring. WhatsApp is generally used by most of peoples and is easily accessible to all so generally preferred in this work. ARM 11 is necessary as it requires the Internet connection for What’ s App data transfer.",
"title": ""
},
{
"docid": "neg:1840499_5",
"text": "Turkey has been undertaking many projects to integrate Information and Communication Technology (ICT) sources into practice in the teaching-learning process in educational institutions. This research study sheds light on the use of ICT tools in primary schools in the social studies subject area, by considering various variables which affect the success of the implementation of the use of these tools. A survey was completed by 326 teachers who teach fourth and fifth grade at primary level. The results showed that although teachers are willing to use ICT resources and are aware of the existing potential, they are facing problems in relation to accessibility to ICT resources and lack of in-service training opportunities.",
"title": ""
},
{
"docid": "neg:1840499_6",
"text": "This paper presents a comprehensive overview of the literature on the types, effects, conditions and user of Open 6 Government Data (OGD). The review analyses 101 academic studies about OGD which discuss at least one of the four factors 7 of OGD utilization: the different types of utilization, the effects of utilization, the key conditions, and the different users. Our 8 analysis shows that the majority of studies focus on the OGD provisions while assuming, but not empirically testing, various 9 forms of utilization. The paper synthesizes the hypothesized relations in a multi-dimensional framework of OGD utilization. 10 Based on the framework we suggest four future directions for research: 1) investigate the link between type of utilization and 11 type of users (e.g. journalists, citizens) 2) investigate the link between type of user and type of effect (e.g. societal, economic and 12 good governance benefits) 3) investigate the conditions that moderate OGD effects (e.g. policy, data quality) and 4) establishing 13 a causal link between utilization and OGD outcomes. 14",
"title": ""
},
{
"docid": "neg:1840499_7",
"text": "Abstract. In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improve and extend the results in the literature. Two examples are also given to illustrate this work.",
"title": ""
},
{
"docid": "neg:1840499_8",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "neg:1840499_9",
"text": "We introduce a new mobile system framework, SenSec, which uses passive sensory data to ensure the security of applications and data on mobile devices. SenSec constantly collects sensory data from accelerometers, gyroscopes and magnetometers and constructs the gesture model of how a user uses the device. SenSec calculates the sureness that the mobile device is being used by its owner. Based on the sureness score, mobile devices can dynamically request the user to provide active authentication (such as a strong password), or disable certain features of the mobile devices to protect user's privacy and information security. In this paper, we model such gesture patterns through a continuous n-gram language model using a set of features constructed from these sensors. We built mobile application prototype based on this model and use it to perform both user classification and user authentication experiments. User studies show that SenSec can achieve 75% accuracy in identifying the users and 71.3% accuracy in detecting the non-owners with only 13.1% false alarms.",
"title": ""
},
{
"docid": "neg:1840499_10",
"text": "In this work, a portable real-time wireless health monitoring system is developed. The system is used for remote monitoring of patients' heart rate and oxygen saturation in blood. The system was designed and implemented using ZigBee wireless technologies. All pulse oximetry data are transferred within a group of wireless personal area network (WPAN) to database computer server. The sensor modules were designed for low power operation with a program that can adjust power management depending on scenarios of power source and current power operation. The sensor unit consists of (1) two types of LEDs and photodiode packed in Velcro strip that is facing to a patient's fingertip; (2) Microcontroller unit for interfacing with ZigBee module, processing pulse oximetry data and storing some data before sending to base PC; (3) ZigBee module for communicating the data of pulse oximetry, ZigBee module gets all commands from microcontroller unit and it has a complete ZigBee stack inside and (4) Base node for receiving and storing the data before sending to PC.",
"title": ""
},
{
"docid": "neg:1840499_11",
"text": "Melanocyte stem cells (McSCs) and mouse models of hair graying serve as useful systems to uncover mechanisms involved in stem cell self-renewal and the maintenance of regenerating tissues. Interested in assessing genetic variants that influence McSC maintenance, we found previously that heterozygosity for the melanogenesis associated transcription factor, Mitf, exacerbates McSC differentiation and hair graying in mice that are predisposed for this phenotype. Based on transcriptome and molecular analyses of Mitfmi-vga9/+ mice, we report a novel role for MITF in the regulation of systemic innate immune gene expression. We also demonstrate that the viral mimic poly(I:C) is sufficient to expose genetic susceptibility to hair graying. These observations point to a critical suppressor of innate immunity, the consequences of innate immune dysregulation on pigmentation, both of which may have implications in the autoimmune, depigmenting disease, vitiligo.",
"title": ""
},
{
"docid": "neg:1840499_12",
"text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.",
"title": ""
},
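As a rough illustration of the kind of psutil-based resource measurement the passage above describes, the sketch below samples CPU, memory, disk, and network counters on the host. It is a generic example, not the benchmarking code used in the paper, and the one-second sampling interval is an arbitrary choice.

```python
import psutil

# Sample basic host-level resource utilization (per-container attribution
# would require additional tooling, e.g. reading cgroup statistics).
cpu_percent = psutil.cpu_percent(interval=1)   # % CPU over a 1 s window
mem = psutil.virtual_memory()                  # total/available/percent used
cpu_count = psutil.cpu_count(logical=True)
cpu_times = psutil.cpu_times()                 # user/system/idle seconds
partitions = psutil.disk_partitions()
net = psutil.net_io_counters()                 # bytes/packets sent and received

print(f"CPU: {cpu_percent}% of {cpu_count} logical cores")
print(f"Memory: {mem.percent}% used of {mem.total} bytes")
print(f"CPU times: user={cpu_times.user}s system={cpu_times.system}s")
print(f"Disk partitions: {[p.mountpoint for p in partitions]}")
print(f"Network I/O: {net.bytes_sent} bytes sent, {net.bytes_recv} bytes received")
```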
{
"docid": "neg:1840499_13",
"text": "OBJECTIVE\nTo determine the probable factors responsible for stress among undergraduate medical students.\n\n\nMETHODS\nThe qualitative descriptive study was conducted at a public-sector medical college in Islamabad, Pakistan, from January to April 2014. Self-administered open-ended questionnaires were used to collect data from first year medical students in order to study the factors associated with the new environment.\n\n\nRESULTS\nThere were 115 students in the study with a mean age of 19±6.76 years. Overall, 35(30.4%) students had mild to moderate physical problems, 20(17.4%) had severe physical problems and 60(52.2%) did not have any physical problem. Average stress score was 19.6±6.76. Major elements responsible for stress identified were environmental factors, new college environment, student abuse, tough study routines and personal factors.\n\n\nCONCLUSIONS\nMajority of undergraduate students experienced stress due to both academic and emotional factors.",
"title": ""
},
{
"docid": "neg:1840499_14",
"text": "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.",
"title": ""
},
{
"docid": "neg:1840499_15",
"text": "Here we discussed different dielectric substrate frequently used in microstrip patch antenna to enhance overall efficiency of antenna. Various substrates like foam, duroid, benzocyclobutane, roger 4350, epoxy, FR4, Duroid 6010 are in use to achieve better gain and bandwidth. A dielectric substrate is a insulator which is a main constituent of the microstrip structure, where a thicker substrate is considered because it has direct proportionality with bandwidth whereas dielectric constant is inversely proportional to bandwidth as lower the relative permittivity better the fringing is achieved. Another factor that impact directly is loss tangent it shows inverse relation with efficiency the dilemma is here is that substrate with lower loss tangent is costlier. A clear pros and cons are discussed here of different substrates for judicious selection. A substrate gives mechanical strength to the antenna.",
"title": ""
},
{
"docid": "neg:1840499_16",
"text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/ watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received a tremendous attention recently, its application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used unusual method to send message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on slave’s head prior to sending him off to his son-inlaw. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was written on wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, recovered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. 
The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. The earliest actual book on steganography was a four-hundred-page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. Further development in the field occurred in 1883, with the publication of Auguste Kerckhoffs’ Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.",
"title": ""
},
{
"docid": "neg:1840499_17",
"text": "In this paper we propose a sub-band energy based end-ofutterance algorithm that is capable of detecting the time instant when the user has stopped speaking. The proposed algorithm finds the time instant at which many enough sub-band spectral energy trajectories fall and stay for a pre-defined fixed time below adaptive thresholds, i.e. a non-speech period is detected after the end of the utterance. With the proposed algorithm a practical speech recognition system can give timely feedback for the user, thereby making the behaviour of the speech recognition system more predictable and similar across different usage environments and noise conditions. The proposed algorithm is shown to be more accurate and noise robust than the previously proposed approaches. Experiments with both isolated command word recognition and continuous digit recognition in various noise conditions verify the viability of the proposed approach with an average proper endof-utterance detection rate of around 94% in both cases, representing 43% error rate reduction over the most competitive previously published method.",
"title": ""
},
{
"docid": "neg:1840499_18",
"text": "This paper discusses the current status of research on fraud detection undertaken a.s part of the European Commissionfunded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets. sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour proFdes covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the HeUinger distance[5] between them as an alarm criteria. Fine tuning the system to minimise the number of false alarms poses a significant ask due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples ate requited for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS, Introduction When a mobile originated phone call is made or various inter-call criteria are met he cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. By this we mean i terrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some xample fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes ignificantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K, it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. 
Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. We envisage this technique would form part of a larger fraud detection suite that also comprises a rule-based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths.",
"title": ""
}
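The fraud-detection passage above compares short-term and long-term behaviour profiles, held as probability distributions, using the Hellinger distance. Below is a minimal sketch of that distance for discrete distributions; the two example profiles are invented for illustration and are not taken from the paper.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Hypothetical call-behaviour profiles (e.g. fraction of calls per category).
long_term = [0.70, 0.20, 0.08, 0.02]   # user's historical profile
short_term = [0.30, 0.25, 0.15, 0.30]  # recent activity

distance = hellinger(long_term, short_term)  # 0 = identical, 1 = disjoint support
print(f"Hellinger distance: {distance:.3f}")
# A large distance relative to a chosen threshold would raise an alarm.
```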
] |
1840500 | Energy Harvesting Using a Low-Cost Rectenna for Internet of Things (IoT) Applications | [
{
"docid": "pos:1840500_0",
"text": "The idea of wireless power transfer (WPT) has been around since the inception of electricity. In the late 19th century, Nikola Tesla described the freedom to transfer energy between two points without the need for a physical connection to a power source as an \"all-surpassing importance to man\". A truly wireless device, capable of being remotely powered, not only allows the obvious freedom of movement but also enables devices to be more compact by removing the necessity of a large battery. Applications could leverage this reduction in size and weight to increase the feasibility of concepts such as paper-thin, flexible displays, contact-lens-based augmented reality, and smart dust, among traditional point-to-point power transfer applications. While several methods of wireless power have been introduced since Tesla's work, including near-field magnetic resonance and inductive coupling, laser-based optical power transmission, and far-field RF/microwave energy transmission, only RF/microwave and laser-based systems are truly long-range methods. While optical power transmission certainly has merit, its mechanisms are outside of the scope of this article and will not be discussed.",
"title": ""
},
{
"docid": "pos:1840500_1",
"text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.",
"title": ""
}
] | [
{
"docid": "neg:1840500_0",
"text": "We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrasebased statistical machine translation.",
"title": ""
},
{
"docid": "neg:1840500_1",
"text": "Graphs are one of the key data structures for many real-world computing applications and the importance of graph analytics is ever-growing. While existing software graph processing frameworks improve programmability of graph analytics, underlying general purpose processors still limit the performance and energy efficiency of graph analytics. We architect a domain-specific accelerator, Graphicionado, for high-performance, energy-efficient processing of graph analytics workloads. For efficient graph analytics processing, Graphicionado exploits not only data structure-centric datapath specialization, but also memory subsystem specialization, all the while taking advantage of the parallelism inherent in this domain. Graphicionado augments the vertex programming paradigm, allowing different graph analytics applications to be mapped to the same accelerator framework, while maintaining flexibility through a small set of reconfigurable blocks. This paper describes Graphicionado pipeline design choices in detail and gives insights on how Graphicionado combats application execution inefficiencies on general-purpose CPUs. Our results show that Graphicionado achieves a 1.76-6.54x speedup while consuming 50-100x less energy compared to a state-of-the-art software graph analytics processing framework executing 32 threads on a 16-core Haswell Xeon processor.",
"title": ""
},
{
"docid": "neg:1840500_2",
"text": "This paper provides a brief overview to four major types of causal models for health-sciences research: Graphical models (causal diagrams), potential-outcome (counterfactual) models, sufficient-component cause models, and structural-equations models. The paper focuses on the logical connections among the different types of models and on the different strengths of each approach. Graphical models can illustrate qualitative population assumptions and sources of bias not easily seen with other approaches; sufficient-component cause models can illustrate specific hypotheses about mechanisms of action; and potential-outcome and structural-equations models provide a basis for quantitative analysis of effects. The different approaches provide complementary perspectives, and can be employed together to improve causal interpretations of conventional statistical results.",
"title": ""
},
{
"docid": "neg:1840500_3",
"text": "Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones.",
"title": ""
},
{
"docid": "neg:1840500_4",
"text": "Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate-to our knowledge the first-practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors can be protected effectively and efficiently from this new class of attacks.",
"title": ""
},
{
"docid": "neg:1840500_5",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "neg:1840500_6",
"text": "Today, by integrating Near Field Communication (NFC) technology in smartphones, bank cards and payment terminals, a purchase transaction can be executed immediately without any physical contact, without entering a PIN code or a signature. Europay Mastercard Visa (EMV) is the standard dedicated for securing contactless-NFC payment transactions. However, it does not ensure two main security proprieties: (1) the authentication of the payment terminal to the client's payment device, (2) the confidentiality of personal banking data. In this paper, we first of all detail EMV standard and its security vulnerabilities. Then, we propose a solution that enhances the EMV protocol by adding a new security layer aiming to solve EMV weaknesses. We formally check the correctness of the proposal using a security verification tool called Scyther.",
"title": ""
},
{
"docid": "neg:1840500_7",
"text": "CONCRETE",
"title": ""
},
{
"docid": "neg:1840500_8",
"text": "Research has consistently found that school students who do not identify as self-declared completely heterosexual are at increased risk of victimization by bullying from peers. This study examined heterosexual and nonheterosexual university students' involvement in both traditional and cyber forms of bullying, as either bullies or victims. Five hundred twenty-eight first-year university students (M=19.52 years old) were surveyed about their sexual orientation and their bullying experiences over the previous 12 months. The results showed that nonheterosexual young people reported higher levels of involvement in traditional bullying, both as victims and perpetrators, in comparison to heterosexual students. In contrast, cyberbullying trends were generally found to be similar for heterosexual and nonheterosexual young people. Gender differences were also found. The implications of these results are discussed in terms of intervention and prevention of the victimization of nonheterosexual university students.",
"title": ""
},
{
"docid": "neg:1840500_9",
"text": "To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider \"quantified self\" movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token \"mcdonalds\" or the category \"dessert\" being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the \"quick added calories\" functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries.",
"title": ""
},
{
"docid": "neg:1840500_10",
"text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-",
"title": ""
},
{
"docid": "neg:1840500_11",
"text": "Through AspectJ, aspect-oriented programming (AOP) is becoming of increasing interest and availability to Java programmers as it matures as a methodology for improved software modularity via the separation of cross-cutting concerns. AOP proponents often advocate a development strategy where Java programmers write the main application, ignoring cross-cutting concerns, and then AspectJ programmers, domain experts in their specific concerns, weave in the logic for these more specialized cross-cutting concerns. However, several authors have recently debated the merits of this strategy by empirically showing certain drawbacks. The proposed solutions paint a different development strategy where base code and aspect programmers are aware of each other (to varying degrees) and interactions between cross-cutting concerns are planned for early on.\n Herein we explore new possibilities in the language design space that open up when the base code is aware of cross-cutting aspects. Using our insights from this exploration we concretize these new possibilities by extending AspectJ with concise yet powerful constructs, while maintaining full backwards compatibility. These new constructs allow base code and aspects to cooperate in ways that were previously not possible: arbitrary blocks of code can be advised, advice can be explicitly parameterized, base code can guide aspects in where to apply advice, and aspects can statically enforce new constraints upon the base code that they advise. These new techniques allow aspect modularity and program safety to increase. We illustrate the value of our extensions through an example based on transactions.",
"title": ""
},
{
"docid": "neg:1840500_12",
"text": "This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging in the R-tree family. Five different algorithms (four recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is treated as a special case. An extensive study, based on experiments performed with synthetic as well as with real point data sets, is presented. A wide range of values for the basic parameters affecting the performance of the algorithms, especially the effect of overlap between the two data sets, is explored. Moreover, an algorithmic as well as an experimental comparison with existing incremental algorithms addressing the same problem is presented. In most settings, the new algorithms proposed clearly outperform the existing ones.",
"title": ""
},
{
"docid": "neg:1840500_13",
"text": "The dual-polarized corporate-feed waveguide slot array antenna is designed for the 60 GHz band. Using the multi-layer structure, we have realized dual-polarization operation. Even though the gain is approximately 1 dB lower than the antenna for the single polarization due to the -15dB cross-polarization level in 8=58°, this antenna still shows very high gain over 32 dBi over the broad bandwidth. This antenna will be fabricated and measured in future.",
"title": ""
},
{
"docid": "neg:1840500_14",
"text": "We have constructed a wave-front sensor to measure the irregular as well as the classical aberrations of the eye, providing a more complete description of the eye's aberrations than has previously been possible. We show that the wave-front sensor provides repeatable and accurate measurements of the eye's wave aberration. The modulation transfer function of the eye computed from the wave-front sensor is in fair, though not complete, agreement with that obtained under similar conditions on the same observers by use of the double-pass and the interferometric techniques. Irregular aberrations, i.e., those beyond defocus, astigmatism, coma, and spherical aberration, do not have a large effect on retinal image quality in normal eyes when the pupil is small (3 mm). However, they play a substantial role when the pupil is large (7.3-mm), reducing visual performance and the resolution of images of the living retina. Although the pattern of aberrations varies from subject to subject, aberrations, including irregular ones, are correlated in left and right eyes of the same subject, indicating that they are not random defects.",
"title": ""
},
{
"docid": "neg:1840500_15",
"text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways. In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.",
"title": ""
},
{
"docid": "neg:1840500_16",
"text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. 
Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments; these are: Freedom to Fail, Rapid Feedback, Progression, and Storytelling. Freedom to Fail: Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using",
"title": ""
},
{
"docid": "neg:1840500_17",
"text": "Wireless sensor network has become an emerging technology due its wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. Wireless sensor network consists of thousands of miniature devices which are called sensors but as it uses wireless media for communication, so security is the major issue. There are number of attacks on wireless of which selective forwarding attack is one of the harmful attacks. This paper describes selective forwarding attack and detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents qualitative analysis of detection techniques in tabular form. Keywordswireless sensor network, attacks, selective forwarding attacks, malicious nodes.",
"title": ""
},
{
"docid": "neg:1840500_18",
"text": "Temporal-di erence (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at di erent levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem|that of learning a model and value function for the case of xed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985). 1 Multi-Scale Planning and Modeling Model-based reinforcement learning o ers a potentially elegant solution to the problem of integrating planning into a real-time learning and decisionmaking agent (Sutton, 1990; Barto et al., 1995; Peng & Williams, 1993, Moore & Atkeson, 1994; Dean et al., in prep). However, most current reinforcementlearning systems assume a single, xed time step: actions take one step to complete, and their immediate consequences become available after one step. This makes it di cult to learn and plan at di erent time scales. For example, commuting to work involves planning at a high level about which route to drive (or whether to take the train) and at a low level about how to steer, when to brake, etc. Planning is necessary at both levels in order to optimize precise low-level movements without becoming lost in a sea of detail when making decisions at a high level. Moreover, these levels cannot be kept totally distinct and separate. They must interrelate at least in the sense that the actions and plans at a high levels must be turned into actual, moment-by-moment decisions at the lowest level. The need for hierarchical and abstract planning is a fundamental problem in AI whether or not one uses the reinforcement-learning framework (e.g., Fikes et al., 1972; Sacerdoti, 1977; Kuipers, 1979; Laird et al., 1986; Korf, 1985; Minton, 1988; Watkins, 1989; Drescher, 1991; Ring, 1991; Wixson, 1991; Schmidhuber, 1991; Tenenberg et al., 1992; Kaelbling, 1993; Lin, 1993; Dayan & Hinton, 1993; Dejong, 1994; Chrisman, 1994; Hansen, 1994; Dean & Lin, in prep). We do not propose to fully solve it in this paper. Rather, we develop an approach to multiple-time-scale modeling of the world that may eventually be useful in such a solution. Our approach is to extend temporal-di erence (TD) methods, which are commonly used in reinforcement learning systems to learn value functions, such that they can be used to learn world models. When TD methods are used, the predictions of the models can naturally extend beyond a single time step. As we will show, they can even make predictions that are not speci c to a single time scale, but intermix many such scales, with no loss of performance when the models are used. This approach is an extension of the ideas of Singh (1992), Dayan (1993), and Sutton & Pinette",
"title": ""
},
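The passage above builds on temporal-difference prediction. As background, here is a minimal tabular TD(0) sketch for estimating state values under a fixed policy; it illustrates only the basic one-step TD update, not the multi-scale TD models the paper develops, and the toy transitions are invented.

```python
import random

# Toy Markov chain: states 0..3, state 3 is terminal.
# transitions[s] -> list of (next_state, reward) options, chosen uniformly.
transitions = {
    0: [(1, 0.0), (2, 0.0)],
    1: [(3, 1.0)],
    2: [(3, 0.0)],
}

alpha, gamma = 0.1, 1.0
V = {s: 0.0 for s in range(4)}  # value estimates; the terminal state stays 0

for _ in range(5000):
    s = 0
    while s != 3:
        s_next, r = random.choice(transitions[s])
        # One-step TD(0) update toward the bootstrapped target r + gamma * V[s'].
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print({s: round(v, 2) for s, v in V.items()})
# Expected roughly: V[0] ~ 0.5, V[1] ~ 1.0, V[2] ~ 0.0
```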
{
"docid": "neg:1840500_19",
"text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.",
"title": ""
}
] |
1840501 | A Measure for Objective Evaluation of Image Segmentation Algorithms | [
{
"docid": "pos:1840501_0",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
}
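To complement the mean-shift description above, here is a minimal flat-kernel sketch of the procedure for a single query point over a small 2-D point set. The data and bandwidth are arbitrary illustrative choices; a full segmentation pipeline, as in the paper, would run this from every pixel's joint spatial-range feature vector.

```python
import numpy as np

def mean_shift_mode(x, data, bandwidth, max_iter=100, tol=1e-4):
    """Shift x to a local density mode using a flat kernel of the given bandwidth."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        dists = np.linalg.norm(data - x, axis=1)
        neighbors = data[dists <= bandwidth]
        if len(neighbors) == 0:
            break
        new_x = neighbors.mean(axis=0)       # the mean shift step
        if np.linalg.norm(new_x - x) < tol:  # converged to a stationary point
            return new_x
        x = new_x
    return x

# Two illustrative clusters of 2-D points.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
                  rng.normal([3, 3], 0.3, (50, 2))])

print(mean_shift_mode([0.5, 0.5], data, bandwidth=1.0))  # converges near (0, 0)
print(mean_shift_mode([2.5, 2.8], data, bandwidth=1.0))  # converges near (3, 3)
```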
] | [
{
"docid": "neg:1840501_0",
"text": "We introduce an iterative normalization and clustering method for single-cell gene expression data. The emerging technology of single-cell RNA-seq gives access to gene expression measurements for thousands of cells, allowing discovery and characterization of cell types. However, the data is confounded by technical variation emanating from experimental errors and cell type-specific biases. Current approaches perform a global normalization prior to analyzing biological signals, which does not resolve missing data or variation dependent on latent cell types. Our model is formulated as a hierarchical Bayesian mixture model with cell-specific scalings that aid the iterative normalization and clustering of cells, teasing apart technical variation from biological signals. We demonstrate that this approach is superior to global normalization followed by clustering. We show identifiability and weak convergence guarantees of our method and present a scalable Gibbs inference algorithm. This method improves cluster inference in both synthetic and real single-cell data compared with previous methods, and allows easy interpretation and recovery of the underlying structure and cell types.",
"title": ""
},
{
"docid": "neg:1840501_1",
"text": "We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.",
"title": ""
},
{
"docid": "neg:1840501_2",
"text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.",
"title": ""
},
{
"docid": "neg:1840501_3",
"text": "Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both in the order of micro-seconds). As such, they have great potential for fast and low power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the predicted corner velocities from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ∼ 90% and show that the method is robust to changes in speed of both the head and the target.",
"title": ""
},
{
"docid": "neg:1840501_4",
"text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.",
"title": ""
},
{
"docid": "neg:1840501_5",
"text": "A great deal of research exists on the neural basis of theory-of-mind (ToM) or mentalizing. Qualitative reviews on this topic have identified a mentalizing network composed of the medial prefrontal cortex, posterior cingulate/precuneus, and bilateral temporal parietal junction. These conclusions, however, are not based on a quantitative and systematic approach. The current review presents a quantitative meta-analysis of neuroimaging studies pertaining to ToM, using the activation-likelihood estimation (ALE) approach. Separate ALE meta-analyses are presented for story-based and nonstory-based studies of ToM. The conjunction of these two meta-analyses reveals a core mentalizing network that includes areas not typically noted by previous reviews. A third ALE meta-analysis was conducted with respect to story comprehension in order to examine the relation between ToM and stories. Story processing overlapped with many regions of the core mentalizing network, and these shared regions bear some resemblance to a network implicated by a number of other processes.",
"title": ""
},
{
"docid": "neg:1840501_6",
"text": "OBJECTIVE\nTo statistically analyze the long-term results of alar base reduction after rhinoplasty.\n\n\nMETHODS\nAmong a consecutive series of 100 rhinoplasty cases, 19 patients required alar base reduction. The mean (SD) follow-up time was 11 (9) months (range, 2 months to 3 years). Using preoperative and postoperative photographs, comparisons were made of the change in the base width (width of base between left and right alar-facial junctions), flare width (width on base view between points of widest alar flare), base height (distance from base to nasal tip on base view), nostril height (distance from base to anterior edge of nostril), and vertical flare (vertical distance from base to the widest alar flare). Notching at the nasal sill was recorded as none, minimal, mild, moderate, and severe.\n\n\nRESULTS\nChanges in vertical flare (P<.05) and nostril height (P<.05) were the only significant differences seen in the patients who required alar reduction. No significant change was seen in base width (P=.92), flare width (P=.41), or base height (P=.22). No notching was noted.\n\n\nCONCLUSIONS\nIt would have been preferable to study patients undergoing alar reduction without concomitant rhinoplasty procedures, but this approach is not practical. To our knowledge, the present study represents the most extensive attempt in the literature to characterize and quantify the postoperative effects of alar base reduction.",
"title": ""
},
{
"docid": "neg:1840501_7",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "neg:1840501_8",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "neg:1840501_9",
"text": "We propose Smells Phishy?, a board game that contributes to raising users' awareness of online phishing scams. We designed and developed the board game and conducted user testing with 21 participants. The results showed that after playing the game, participants had better understanding of phishing scams and learnt how to better protect themselves. Participants enjoyed playing the game and said that it was a fun and exciting experience. The game increased knowledge and awareness, and encouraged discussion.",
"title": ""
},
{
"docid": "neg:1840501_10",
"text": "The debate continues around transconjunctival versus transcutaneous approaches. Despite the perceived safety of the former, many experienced surgeons continue to advocate the latter. This review aims to present a balanced view of each approach. It will first address the anatomic basis of lower lid aging and then organize recent literature and associated discussion into the transconjunctival and transcutaneous approaches. The integrated algorithm employed by the senior author will be presented. Finally this review will describe less mainstream suture techniques for lower lid rejuvenation and lower lid blepharoplasty complications with a focus upon lower lid malposition.",
"title": ""
},
{
"docid": "neg:1840501_11",
"text": "Identifying the language used will typically be the first step in most natural language processing tasks. Among the wide variety of language identification methods discussed in the literature, the ones employing the Cavnar and Trenkle (1994) approach to text categorization based on character n-gram frequencies have been particularly successful. This paper presents the R extension package textcat for n-gram based text categorization which implements both the Cavnar and Trenkle approach as well as a reduced n-gram approach designed to remove redundancies of the original approach. A multi-lingual corpus obtained from the Wikipedia pages available on a selection of topics is used to illustrate the functionality of the package and the performance of the provided language identification methods.",
"title": ""
},
{
"docid": "neg:1840501_12",
"text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.",
"title": ""
},
{
"docid": "neg:1840501_13",
"text": "UNLABELLED\nAcarbose is an α-glucosidase inhibitor produced by Actinoplanes sp. SE50/110 that is medically important due to its application in the treatment of type2 diabetes. In this work, a comprehensive proteome analysis of Actinoplanes sp. SE50/110 was carried out to determine the location of proteins of the acarbose (acb) and the putative pyochelin (pch) biosynthesis gene cluster. Therefore, a comprehensive state-of-the-art proteomics approach combining subcellular fractionation, shotgun proteomics and spectral counting to assess the relative abundance of proteins within fractions was applied. The analysis of four different proteome fractions (cytosolic, enriched membrane, membrane shaving and extracellular fraction) resulted in the identification of 1582 of the 8270 predicted proteins. All 22 Acb-proteins and 21 of the 23 Pch-proteins were detected. Predicted membrane-associated, integral membrane or extracellular proteins of the pch and the acb gene cluster were found among the most abundant proteins in corresponding fractions. Intracellular biosynthetic proteins of both gene clusters were not only detected in the cytosolic, but also in the enriched membrane fraction, indicating that the biosynthesis of acarbose and putative pyochelin metabolites takes place at the inner membrane.\n\n\nBIOLOGICAL SIGNIFICANCE\nActinoplanes sp. SE50/110 is a natural producer of the α-glucosidase inhibitor acarbose, a bacterial secondary metabolite that is used as a drug for the treatment of type 2 diabetes, a disease which is a global pandemic that currently affects 387 million people and accounts for 11% of worldwide healthcare expenditures (www.idf.org). The work presented here is the first comprehensive investigation of protein localization and abundance in Actinoplanes sp. SE50/110 and provides an extensive source of information for the selection of genes for future mutational analysis and other hypothesis driven experiments. The conclusion that acarbose or pyochelin family siderophores are synthesized at the inner side of the cytoplasmic membrane determined from this work, indicates that studying corresponding intermediates will be challenging. In addition to previous studies on the genome and transcriptome, the work presented here demonstrates that the next omic level, the proteome, is now accessible for detailed physiological analysis of Actinoplanes sp. SE50/110, as well as mutants derived from this and related species.",
"title": ""
},
{
"docid": "neg:1840501_14",
"text": "Empirical studies largely support the continuity hypothesis of dreaming. Despite of previous research efforts, the exact formulation of the continuity hypothesis remains vague. The present paper focuses on two aspects: (1) the differential incorporation rate of different waking-life activities and (2) the magnitude of which interindividual differences in waking-life activities are reflected in corresponding differences in dream content. Using a correlational design, a positive, non-zero correlation coefficient will support the continuity hypothesis. Although many researchers stress the importance of emotional involvement on the incorporation rate of waking-life experiences into dreams, formulated the hypothesis that highly focused cognitive processes such as reading, writing, etc. are rarely found in dreams due to the cholinergic activation of the brain during dreaming. The present findings based on dream diaries and the exact measurement of waking activities replicated two recent questionnaire studies. These findings indicate that it will be necessary to specify the continuity hypothesis more fully and include factors (e.g., type of waking-life experience, emotional involvement) which modulate the incorporation rate of waking-life experiences into dreams. Whether the cholinergic state of the brain during REM sleep or other alterations of brain physiology (e.g., down-regulation of the dorsolateral prefrontal cortex) are the underlying factors of the rare occurrence of highly focused cognitive processes in dreaming remains an open question. Although continuity between waking life and dreaming has been demonstrated, i.e., interindividual differences in the amount of time spent with specific waking-life activities are reflected in dream content, methodological issues (averaging over a two-week period, small number of dreams) have limited the capacity for detecting substantial relationships in all areas. Nevertheless, it might be concluded that the continuity hypothesis in its present general form is not valid and should be elaborated and tested in a more specific way.",
"title": ""
},
{
"docid": "neg:1840501_15",
"text": "The human gut is populated with as many as 100 trillion cells, whose collective genome, the microbiome, is a reflection of evolutionary selection pressures acting at the level of the host and at the level of the microbial cell. The ecological rules that govern the shape of microbial diversity in the gut apply to mutualists and pathogens alike.",
"title": ""
},
{
"docid": "neg:1840501_16",
"text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "neg:1840501_17",
"text": "The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post quantum cryptography. Specifically, the section of Post-Quantum Cryptography deals with different quantum key distribution methods and mathematicalbased solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography. Keywords—quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography",
"title": ""
},
{
"docid": "neg:1840501_18",
"text": "0950-7051/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.knosys.2011.07.017 ⇑ Corresponding author at: Shenzhen Institutes of A Academy of Sciences, Shenzhen 518055, China. E-mail addresses: zy.zhao@siat.ac.cn, zy.zhao10@ @siat.ac.cn (S. Feng), qiang.wang1@siat.ac.cn (Q. (J.Z. Huang), Graham.Williams@togaware.com (G.J. W Fan) . Community detection is an important issue in social network analysis. Most existing methods detect communities through analyzing the linkage of the network. The drawback is that each community identified by those methods can only reflect the strength of connections, but it cannot reflect the semantics such as the interesting topics shared by people. To address this problem, we propose a topic oriented community detection approach which combines both social objects clustering and link analysis. We first use a subspace clustering algorithm to group all the social objects into topics. Then we divide the members that are involved in those social objects into topical clusters, each corresponding to a distinct topic. In order to differentiate the strength of connections, we perform a link analysis on each topical cluster to detect the topical communities. Experiments on real data sets have shown that our approach was able to identify more meaningful communities. The quantitative evaluation indicated that our approach can achieve a better performance when the topics are at least as important as the links to the analysis. 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840501_19",
"text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a \\emph{new} ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by \\emph{fuzzy} matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.",
"title": ""
}
] |
1840502 | Achieving Flexible and Self-Contained Data Protection in Cloud Computing | [
{
"docid": "pos:1840502_0",
"text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.",
"title": ""
}
] | [
{
"docid": "neg:1840502_0",
"text": "Process mining can be seen as the “missing link” between data mining and business process management. The lion0s share of process mining research has been devoted to the discovery of procedural process models from event logs. However, often there are predefined constraints that (partially) describe the normative or expected process, e.g., “activity A should be followed by B” or “activities A and B should never be both executed”. A collection of such constraints is called a declarative process model. Although it is possible to discover such models based on event data, this paper focuses on aligning event logs and predefined declarative process models. Discrepancies between log and model are mediated such that observed log traces are related to paths in the model. The resulting alignments provide sophisticated diagnostics that pinpoint where deviations occur and how severe they are. Moreover, selected parts of the declarative process model can be used to clean and repair the event log before applying other process mining techniques. Our alignment-based approach for preprocessing and conformance checking using declarative process models has been implemented in ProM and has been evaluated using both synthetic logs and real-life logs from a Dutch hospital. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840502_1",
"text": "Quadratic differentials naturally define analytic orientation fields on planar surfaces. We propose to model orientation fields of fingerprints by specifying quadratic differentials. Models for all fingerprint classes such as arches, loops and whorls are laid out. These models are parametrised by few, geometrically interpretable parameters which are invariant under Euclidean motions. We demonstrate their ability in adapting to given, observed orientation fields, and we compare them to existing models using the fingerprint images of the NIST Special Database 4. We also illustrate that these model allow for extrapolation into unobserved regions. This goes beyond the scope of earlier models for the orientation field as those are restricted to the observed planar fingerprint region. Within the framework of quadratic differentials we are able to verify analytically Penrose's formula for the singularities on a palm [L. S. Penrose, \"Dermatoglyphics\"' Scientific American, vol. 221, no.~6, pp. 73--84, 1969]. Potential applications of these models are the use of their parameters as indices of large fingerprint databases, as well as the definition of intrinsic coordinates for single fingerprint images.",
"title": ""
},
{
"docid": "neg:1840502_2",
"text": "Fanconi anemia (FA) is a recessively inherited disease characterized by multiple symptoms including growth retardation, skeletal abnormalities, and bone marrow failure. The FA diagnosis is complicated due to the fact that the clinical manifestations are both diverse and variable. A chromosomal breakage test using a DNA cross-linking agent, in which cells from an FA patient typically exhibit an extraordinarily sensitive response, has been considered the gold standard for the ultimate diagnosis of FA. In the majority of FA patients the test results are unambiguous, although in some cases the presence of hematopoietic mosaicism may complicate interpretation of the data. However, some diagnostic overlap with other syndromes has previously been noted in cases with Nijmegen breakage syndrome. Here we present results showing that misdiagnosis may also occur with patients suffering from two of the three currently known cohesinopathies, that is, Roberts syndrome (RBS) and Warsaw breakage syndrome (WABS). This complication may be avoided by scoring metaphase chromosomes-in addition to chromosomal breakage-for spontaneously occurring premature centromere division, which is characteristic for RBS and WABS, but not for FA.",
"title": ""
},
{
"docid": "neg:1840502_3",
"text": "A recently introduced deep neural network (DNN) has achieved some unprecedented gains in many challenging automatic speech recognition (ASR) tasks. In this paper deep neural network hidden Markov model (DNN-HMM) acoustic models is introduced to phonotactic language recognition and outperforms artificial neural network hidden Markov model (ANN-HMM) and Gaussian mixture model hidden Markov model (GMM-HMM) acoustic model. Experimental results have confirmed that phonotactic language recognition system using DNN-HMM acoustic model yields relative equal error rate reduction of 28.42%, 14.06%, 18.70% and 12.55%, 7.20%, 2.47% for 30s, 10s, 3s comparing with the ANN-HMM and GMM-HMM approaches respectively on National Institute of Standards and Technology language recognition evaluation (NIST LRE) 2009 tasks.",
"title": ""
},
{
"docid": "neg:1840502_4",
"text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.",
"title": ""
},
{
"docid": "neg:1840502_5",
"text": "Expanding view of minimal invasive surgery horizon reveals new practice areas for surgeons and patients. Laparoscopic inguinal hernia repair is an example in progress wondered by many patients and surgeons. Advantages in laparoscopic repair motivate surgeons to discover this popular field. In addition, patients search the most convenient surgical method for themselves today. Laparoscopic approaches to inguinal hernia surgery have become popular as a result of the development of experience about different laparoscopic interventions, and these techniques are increasingly used these days. As other laparoscopic surgical methods, experience is the most important point in order to obtain good results. This chapter aims to show technical details, pitfalls and the literature results about two methods that are commonly used in laparoscopic inguinal hernia repair.",
"title": ""
},
{
"docid": "neg:1840502_6",
"text": "Almost all automatic semantic role labeling (SRL) systems rely on a preliminary parsing step that derives a syntactic structure from the sentence being analyzed. This makes the choice of syntactic representation an essential design decision. In this paper, we study the influence of syntactic representation on the performance of SRL systems. Specifically, we compare constituent-based and dependencybased representations for SRL of English in the FrameNet paradigm. Contrary to previous claims, our results demonstrate that the systems based on dependencies perform roughly as well as those based on constituents: For the argument classification task, dependencybased systems perform slightly higher on average, while the opposite holds for the argument identification task. This is remarkable because dependency parsers are still in their infancy while constituent parsing is more mature. Furthermore, the results show that dependency-based semantic role classifiers rely less on lexicalized features, which makes them more robust to domain changes and makes them learn more efficiently with respect to the amount of training data.",
"title": ""
},
{
"docid": "neg:1840502_7",
"text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.",
"title": ""
},
{
"docid": "neg:1840502_8",
"text": "In this paper we propose a recognition system of medical concepts from free text clinical reports. Our approach tries to recognize also concepts which are named with local terminology, with medical writing scripts, short words, abbreviations and even spelling mistakes. We consider a clinical terminology ontology (Snomed-CT), as a dictionary of concepts. In a first step we obtain an embedding model using word2vec methodology from a big corpus database of clinical reports. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space, and so the geometrical similarity can be considered a measure of semantic relation. We have considered 615513 emergency clinical reports from the Hospital \"Rafael Méndez\" in Lorca, Murcia. In these reports there are a lot of local language of the emergency domain, medical writing scripts, short words, abbreviations and even spelling mistakes. With the model obtained we represent the words and sentences as vectors, and by applying cosine similarity we identify which concepts of the ontology are named in the text. Finally, we represent the clinical reports (EHR) like a bag of concepts, and use this representation to search similar documents. The paper illustrates 1) how we build the word2vec model from the free text clinical reports, 2) How we extend the embedding from words to sentences, and 3) how we use the cosine similarity to identify concepts. The experimentation, and expert human validation, shows that: a) the concepts named in the text with the ontology terminology are well recognized, and b) others concepts that are not named with the ontology terminology are also recognized, obtaining a high precision and recall measures.",
"title": ""
},
{
"docid": "neg:1840502_9",
"text": "An emotional version of Sapir-Whorf hypothesis suggests that differences in language emotionalities influence differences among cultures no less than conceptual differences. Conceptual contents of languages and cultures to significant extent are determined by words and their semantic differences; these could be borrowed among languages and exchanged among cultures. Emotional differences, as suggested in the paper, are related to grammar and mostly cannot be borrowed. Conceptual and emotional mechanisms of languages are considered here along with their functions in the mind and cultural evolution. A fundamental contradiction in human mind is considered: language evolution requires reduced emotionality, but “too low” emotionality makes language “irrelevant to life,” disconnected from sensory-motor experience. Neural mechanisms of these processes are suggested as well as their mathematical models: the knowledge instinct, the language instinct, the dual model connecting language and cognition, dynamic logic, neural modeling fields. Mathematical results are related to cognitive science, linguistics, and psychology. Experimental evidence and theoretical arguments are discussed. Approximate equations for evolution of human minds and cultures are obtained. Their solutions identify three types of cultures: \"conceptual\"-pragmatic cultures, in which emotionality of language is reduced and differentiation overtakes synthesis resulting in fast evolution at the price of uncertainty of values, self doubts, and internal crises; “traditional-emotional” cultures where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation; and “multi-cultural” societies combining fast cultural evolution and stability. Unsolved problems and future theoretical and experimental directions are discussed.",
"title": ""
},
{
"docid": "neg:1840502_10",
"text": "Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.",
"title": ""
},
{
"docid": "neg:1840502_11",
"text": "INTRODUCTION\nCancer incidence and mortality estimates for 25 cancers are presented for the 40 countries in the four United Nations-defined areas of Europe and for the European Union (EU-27) for 2012.\n\n\nMETHODS\nWe used statistical models to estimate national incidence and mortality rates in 2012 from recently-published data, predicting incidence and mortality rates for the year 2012 from recent trends, wherever possible. The estimated rates in 2012 were applied to the corresponding population estimates to obtain the estimated numbers of new cancer cases and deaths in Europe in 2012.\n\n\nRESULTS\nThere were an estimated 3.45 million new cases of cancer (excluding non-melanoma skin cancer) and 1.75 million deaths from cancer in Europe in 2012. The most common cancer sites were cancers of the female breast (464,000 cases), followed by colorectal (447,000), prostate (417,000) and lung (410,000). These four cancers represent half of the overall burden of cancer in Europe. The most common causes of death from cancer were cancers of the lung (353,000 deaths), colorectal (215,000), breast (131,000) and stomach (107,000). In the European Union, the estimated numbers of new cases of cancer were approximately 1.4 million in males and 1.2 million in females, and around 707,000 men and 555,000 women died from cancer in the same year.\n\n\nCONCLUSION\nThese up-to-date estimates of the cancer burden in Europe alongside the description of the varying distribution of common cancers at both the regional and country level provide a basis for establishing priorities to cancer control actions in Europe. The important role of cancer registries in disease surveillance and in planning and evaluating national cancer plans is becoming increasingly recognised, but needs to be further advocated. The estimates and software tools for further analysis (EUCAN 2012) are available online as part of the European Cancer Observatory (ECO) (http://eco.iarc.fr).",
"title": ""
},
{
"docid": "neg:1840502_12",
"text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.",
"title": ""
},
{
"docid": "neg:1840502_13",
"text": "Although there have been some promising results in computer lipreading, there has been a paucity of data on which to train automatic systems. However the recent emergence of the TCDTIMIT corpus, with around 6000 words, 59 speakers and seven hours of recorded audio-visual speech, allows the deployment of more recent techniques in audio-speech such as Deep Neural Networks (DNNs) and sequence discriminative training. In this paper we combine the DNN with a Hidden Markov Model (HMM) to the, so called, hybrid DNN-HMM configuration which we train using a variety of sequence discriminative training methods. This is then followed with a weighted finite state transducer. The conclusion is that the DNN offers very substantial improvement over a conventional classifier which uses a Gaussian Mixture Model (GMM) to model the densities even when optimised with Speaker Adaptive Training. Sequence adaptive training offers further improvements depending on the precise variety employed but those improvements are of the order of 10% improvement in word accuracy. Putting these two results together implies that lipreading is moving from something of rather esoteric interest to becoming a practical reality in the foreseeable future.",
"title": ""
},
{
"docid": "neg:1840502_14",
"text": "As in any new technology adoption in organizations, big data solutions (BDS) also presents some security threat and challenges, especially due to the characteristics of big data itself the volume, velocity and variety of data. Even though many security considerations associated to the adoption of BDS have been publicized, it remains unclear whether these publicized facts have any actual impact on the adoption of the solutions. Hence, it is the intent of this research-in-progress to examine the security determinants by focusing on the influence that various technological factors in security, organizational security view and security related environmental factors have on BDS adoption. One technology adoption framework, the TOE (technological-organizational-environmental) framework is adopted as the main conceptual research framework. This research will be conducted using a Sequential Explanatory Mixed Method approach. Quantitative method will be used for the first part of the research, specifically using an online questionnaire survey. The result of this first quantitative process will then be further explored and complemented with a case study. Results generated from both quantitative and qualitative phases will then be triangulated and a cross-study synthesis will be conducted to form the final result and discussion.",
"title": ""
},
{
"docid": "neg:1840502_15",
"text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.",
"title": ""
},
{
"docid": "neg:1840502_16",
"text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.",
"title": ""
},
{
"docid": "neg:1840502_17",
"text": "Implanted sensors and actuators in the human body promise in-situ health monitoring and rapid advancements in personalized medicine. We propose a new paradigm where such implants may communicate wirelessly through a technique called as galvanic coupling, which uses weak electrical signals and the conduction properties of body tissues. While galvanic coupling overcomes the problem of massive absorption of RF waves in the body, the unique intra-body channel raises several questions on the topology of the implants and the external (i.e., on skin) data collection nodes. This paper makes the first contributions towards (i) building an energy-efficient topology through optimal placement of data collection points/relays using measurement-driven tissue channel models, and (ii) balancing the energy consumption over the entire implant network so that the application needs are met. We achieve this via a two-phase iterative clustering algorithm for the implants and formulate an optimization problem that decides the position of external data-gathering points. Our theoretical results are validated via simulations and experimental studies on real tissues, with demonstrated increase in the network lifetime.",
"title": ""
},
{
"docid": "neg:1840502_18",
"text": "Academic search engines and digital libraries provide convenient online search and access facilities for scientific publications. However, most existing systems do not include books in their collections although several books are freely available online. Academic books are different from papers in terms of their length, contents and structure. We argue that accounting for academic books is important in understanding and assessing scientific impact. We introduce an open-book search engine that extracts and indexes metadata, contents, and bibliography from online PDF book documents. To the best of our knowledge, no previous work gives a systematical study on building a search engine for books.\n We propose a hybrid approach for extracting title and authors from a book that combines results from CiteSeer, a rule based extractor, and a SVM based extractor, leveraging web knowledge. For \"table of contents\" recognition, we propose rules based on multiple regularities based on numbering and ordering. In addition, we study bibliography extraction and citation parsing for a large dataset of books. Finally, we use the multiple fields available in books to rank books in response to search queries. Our system can effectively extract metadata and contents from large collections of online books and provides efficient book search and retrieval facilities.",
"title": ""
}
] |
1840503 | Emotional disorders: cluster 4 of the proposed meta-structure for DSM-V and ICD-11. | [
{
"docid": "pos:1840503_0",
"text": "Epidemiologic studies indicate that children exposed to early adverse experiences are at increased risk for the development of depression, anxiety disorders, or both. Persistent sensitization of central nervous system (CNS) circuits as a consequence of early life stress, which are integrally involved in the regulation of stress and emotion, may represent the underlying biological substrate of an increased vulnerability to subsequent stress as well as to the development of depression and anxiety. A number of preclinical studies suggest that early life stress induces long-lived hyper(re)activity of corticotropin-releasing factor (CRF) systems as well as alterations in other neurotransmitter systems, resulting in increased stress responsiveness. Many of the findings from these preclinical studies are comparable to findings in adult patients with mood and anxiety disorders. Emerging evidence from clinical studies suggests that exposure to early life stress is associated with neurobiological changes in children and adults, which may underlie the increased risk of psychopathology. Current research is focused on strategies to prevent or reverse the detrimental effects of early life stress on the CNS. The identification of the neurobiological substrates of early adverse experience is of paramount importance for the development of novel treatments for children, adolescents, and adults.",
"title": ""
}
] | [
{
"docid": "neg:1840503_0",
"text": "Smartphones, the devices we carry everywhere with us, are being heavily tracked and have undoubtedly become a major threat to our privacy. As “Tracking the trackers” has become a necessity, various static and dynamic analysis tools have been developed in the past. However, today, we still lack suitable tools to detect, measure and compare the ongoing tracking across mobile OSs. To this end, we propose MobileAppScrutinator, based on a simple yet efficient dynamic analysis approach, that works on both Android and iOS (the two most popular OSs today). To demonstrate the current trend in tracking, we select 140 most representative Apps available on both Android and iOS AppStores and test them with MobileAppScrutinator. In fact, choosing the same set of apps on both Android and iOS also enables us to compare the ongoing tracking on these two OSs. Finally, we also discuss the effectiveness of privacy safeguards available on Android and iOS. We show that neither Android nor iOS privacy safeguards in their present state are completely satisfying.",
"title": ""
},
{
"docid": "neg:1840503_1",
"text": "OBJECTIVE\nResearch in both animals and humans indicates that cannabidiol (CBD) has antipsychotic properties. The authors assessed the safety and effectiveness of CBD in patients with schizophrenia.\n\n\nMETHOD\nIn an exploratory double-blind parallel-group trial, patients with schizophrenia were randomized in a 1:1 ratio to receive CBD (1000 mg/day; N=43) or placebo (N=45) alongside their existing antipsychotic medication. Participants were assessed before and after treatment using the Positive and Negative Syndrome Scale (PANSS), the Brief Assessment of Cognition in Schizophrenia (BACS), the Global Assessment of Functioning scale (GAF), and the improvement and severity scales of the Clinical Global Impressions Scale (CGI-I and CGI-S).\n\n\nRESULTS\nAfter 6 weeks of treatment, compared with the placebo group, the CBD group had lower levels of positive psychotic symptoms (PANSS: treatment difference=-1.4, 95% CI=-2.5, -0.2) and were more likely to have been rated as improved (CGI-I: treatment difference=-0.5, 95% CI=-0.8, -0.1) and as not severely unwell (CGI-S: treatment difference=-0.3, 95% CI=-0.5, 0.0) by the treating clinician. Patients who received CBD also showed greater improvements that fell short of statistical significance in cognitive performance (BACS: treatment difference=1.31, 95% CI=-0.10, 2.72) and in overall functioning (GAF: treatment difference=3.0, 95% CI=-0.4, 6.4). CBD was well tolerated, and rates of adverse events were similar between the CBD and placebo groups.\n\n\nCONCLUSIONS\nThese findings suggest that CBD has beneficial effects in patients with schizophrenia. As CBD's effects do not appear to depend on dopamine receptor antagonism, this agent may represent a new class of treatment for the disorder.",
"title": ""
},
{
"docid": "neg:1840503_2",
"text": "This paper describes a customized database and a comprehensive set of queries that can be used for systematic benchmarking of relational database systems. Designing this database and a set of carefully tuned benchmarks represents a first attempt in developing a scientific methodology for performance evaluation of database management systems. We have used this database to perform a comparative evaluation of the database machine DIRECT, the \"university\" and \"commercial\" versions of the INGRES database system, the relational database system ORACLE, and the IDM 500 database machine. We present a subset of our measurements (for the single user case only), that constitute a preliminary performance evaluation of these systems.",
"title": ""
},
{
"docid": "neg:1840503_3",
"text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.",
"title": ""
},
{
"docid": "neg:1840503_4",
"text": "A novel planar inverted-F antenna (PIFA) is designed in this paper. Compared to the previous PIFA, the proposed PIFA can enhance bandwidths and achieve multi-band which is loaded with a T-shaped ground plane and etched slots on ground plane and a rectangular patch. It covered 4 service bands, including GSM900, DCS1800, PCS1900 and ISM2450 under the criteria -7 dB return loss for the first band and -10 dB for the last bands. Process of designing and calculation of parameters are presented in detail. The simulation results showed that each band has good characteristics and the bandwidth has been greatly expanded.",
"title": ""
},
{
"docid": "neg:1840503_5",
"text": "The standardization and performance testing of analysis tools is a prerequisite to widespread adoption of genome-wide sequencing, particularly in the clinic. However, performance testing is currently complicated by the paucity of standards and comparison metrics, as well as by the heterogeneity in sequencing platforms, applications and protocols. Here we present the genome comparison and analytic testing (GCAT) platform to facilitate development of performance metrics and comparisons of analysis tools across these metrics. Performance is reported through interactive visualizations of benchmark and performance testing data, with support for data slicing and filtering. The platform is freely accessible at http://www.bioplanet.com/gcat.",
"title": ""
},
{
"docid": "neg:1840503_6",
"text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist. Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.",
"title": ""
},
{
"docid": "neg:1840503_7",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "neg:1840503_8",
"text": "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.",
"title": ""
},
{
"docid": "neg:1840503_9",
"text": "Shared-memory multiprocessors are frequently used as compute servers with multiple parallel applications executing at the same time. In such environments, the efficiency of a parallel application can be significantly affected by the operating system scheduling policy. In this paper, we use detailed simulation studies to evaluate the performance of several different scheduling strategies, These include regular priority scheduling, coscheduling or gang scheduling, process control with processor partitioning, handoff scheduling, and affinity-based scheduling. We also explore tradeoffs between the use of busy-waiting and blocking synchronization primitives and their interactions with the scheduling strategies. Since effective use of caches is essential to achieving high performance, a key focus is on the impact of the scheduling strategies on the caching behavior of the applications.Our results show that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization. In such situations, use of blocking synchronization primitives can significantly improve performance. Process control and gang scheduling strategies are shown to offer the highest performance, and their performance is relatively independent of the synchronization method used. However, for applications that have sizable working sets that fit into the cache, process control performs better than gang scheduling. For the applications considered, the performance gains due to handoff scheduling and processor affinity are shown to be small.",
"title": ""
},
{
"docid": "neg:1840503_10",
"text": "This paper presents Latent Sampling-based Motion Planning (L-SBMP), a methodology towards computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this paper we combine these recent advances with techniques from samplingbased motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system’s states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space – we refer to this exploration algorithm as Learned Latent RRT (L2RRT). This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.",
"title": ""
},
{
"docid": "neg:1840503_11",
"text": "In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) for continuous action control. On the contrary to original PPO, the proposed scheme uses the batch samples of past policies as well as the current policy for the update for the next policy, where the number of the used past batches is adaptively determined based on the oldness of the past batches measured by the average importance sampling (IS) weight. The new algorithm constructed by combining PPO with the proposed multi-batch experience replay scheme maintains the advantages of original PPO such as random minibatch sampling and small bias due to low IS weights by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly increases the speed and stability of convergence on various continuous control tasks compared to original PPO.",
"title": ""
},
{
"docid": "neg:1840503_12",
"text": "The mapping of lab tests to the Laboratory Test Code controlled terminology in CDISC-SDTM § can be a challenge. One has to find candidates in the extensive controlled terminology list. Then there can be multiple lab tests that map to a single SDTM controlled term. This means additional variables must be used in order to produce a unique test definition (e.g. LBCAT, LBSPEC, LBMETHOD and/or LBELTM). Finally, it can occur that a controlled term is not available and a code needs to be defined in agreement with the rules for Lab tests. This paper describes my experience with the implementation of SDTM controlled terminology for lab tests during an SDTM conversion activity. In six clinical studies 124 lab tests were mapped to 101 SDTM controlled terms. The lab tests included routine lab parameters, coagulation parameters, hormones, glucose tolerance test and pregnancy test. INTRODUCTION This paper aims to give detailed examples of SDTM LB datasets that were created for six studies included in an FDA submission. Background information on the conversion project that formed the context of this work can be found in an earlier PhUSE contribution [1]. With the exception of part of the hormone data all laboratory data of these studies had been extracted from the Oracle Clinical TM NORMLAB2 system, which delivered complete and standardized lab data, i.e. standardized parameter (lab test) names, values, units and ranges. Subsequently, these NORMLAB2 extracts had been enriched with derived variables and records, following internal data standards and conventions, to form standardized analysis-ready datasets. These were the basis for conversion to SDTM LB datasets. The combined source datasets of the six studies held 124 distinct lab tests, which were mapped to 101 distinct lab controlled terms. Controlled terminology for lab tests is part of the SDTM terminology, which is published on the NCI EVS website [2]. New lab test terms have been released for public review through a series of packages [3], starting in 2007. Since version 3.1.2. of the SDTM Implementation Guide [4], the use of SDTM controlled terminology for lab tests is assumed for LBTESTCD and LBTEST (codelists C65047 and C67154). Table 1 provides an overview of the number of lab tests per study in the source data vs. the SDTM datasets (i.e. the number of LBTEST/LBTESTCD codes) and shows how these codes were distributed across different lab test categories. A set of 22 ‘routine safety parameters’ occurred in all four phase III studies (001-004), with 16 tests occurring in all six studies. § Clinical Data Interchange Standards Consortium Study Data Tabulation Model δ National Cancer Institute Enterprise Vocabulary Services",
"title": ""
},
{
"docid": "neg:1840503_13",
"text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840503_14",
"text": "The amidase activities of two Aminobacter sp. strains (DSM24754 and DSM24755) towards the aryl-substituted substrates phenylhydantoin, indolylmethyl hydantoin, D,L-6-phenyl-5,6-dihydrouracil (PheDU) and para-chloro-D,L-6-phenyl-5,6-dihydrouracil were compared. Both strains showed hydantoinase and dihydropyrimidinase activity by hydrolyzing all substrates to the corresponding N-carbamoyl-α- or N-carbamoyl-β-amino acids. However, carbamoylase activity and thus a further degradation of these products to α- and β-amino acids was not detected. Additionally, the genes coding for a dihydropyrimidinase and a carbamoylase of Aminobacter sp. DSM24754 were elucidated. For Aminobacter sp. DSM24755 a dihydropyrimidinase gene flanked by two genes coding for putative ABC transporter proteins was detected. The deduced amino acid sequences of both dihydropyrimidinases are highly similar to the well-studied dihydropyrimidinase of Sinorhizobium meliloti CECT4114. The latter enzyme is reported to accept substituted hydantoins and dihydropyrimidines as substrates. The deduced amino acid sequence of the carbamoylase gene shows a high similarity to the very thermostable enzyme of Pseudomonas sp. KNK003A.",
"title": ""
},
{
"docid": "neg:1840503_15",
"text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.",
"title": ""
},
{
"docid": "neg:1840503_16",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "neg:1840503_17",
"text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: <inline-formula><tex-math notation=\"LaTeX\">$3f+1$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq1-2495213.gif\"/></alternatives></inline-formula> replicas are required to tolerate only <inline-formula><tex-math notation=\"LaTeX\">$f$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq2-2495213.gif\"/></alternatives></inline-formula> faults. Recent works have been able to reduce the minimum number of replicas to <inline-formula><tex-math notation=\"LaTeX\">$2f+1$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq3-2495213.gif\"/></alternatives></inline-formula> by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents <italic>Resource-efficient Byzantine Fault Tolerance</italic> (<sc>ReBFT</sc>), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping <inline-formula> <tex-math notation=\"LaTeX\">$f$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq4-2495213.gif\"/> </alternatives></inline-formula> replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. To underline the flexibility of our approach, we apply <sc>ReBFT</sc> to two existing BFT systems: PBFT and MinBFT.",
"title": ""
},
{
"docid": "neg:1840503_18",
"text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.",
"title": ""
},
{
"docid": "neg:1840503_19",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
}
] |
1840504 | Online Controlled Experiments and A / B Tests | [
{
"docid": "pos:1840504_0",
"text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.",
"title": ""
}
] | [
{
"docid": "neg:1840504_0",
"text": "In this paper, we investigate the use of decentralized blockchain mechanisms for delivering transparent, secure, reliable, and timely energy flexibility, under the form of adaptation of energy demand profiles of Distributed Energy Prosumers, to all the stakeholders involved in the flexibility markets (Distribution System Operators primarily, retailers, aggregators, etc.). In our approach, a blockchain based distributed ledger stores in a tamper proof manner the energy prosumption information collected from Internet of Things smart metering devices, while self-enforcing smart contracts programmatically define the expected energy flexibility at the level of each prosumer, the associated rewards or penalties, and the rules for balancing the energy demand with the energy production at grid level. Consensus based validation will be used for demand response programs validation and to activate the appropriate financial settlement for the flexibility providers. The approach was validated using a prototype implemented in an Ethereum platform using energy consumption and production traces of several buildings from literature data sets. The results show that our blockchain based distributed demand side management can be used for matching energy demand and production at smart grid level, the demand response signal being followed with high accuracy, while the amount of energy flexibility needed for convergence is reduced.",
"title": ""
},
{
"docid": "neg:1840504_1",
"text": "Nowadays, a large amount of documents is generated daily. These documents may contain some spelling errors which should be detected and corrected by using a proofreading tool. Therefore, the existence of automatic writing assistance tools such as spell-checkers/correctors could help to improve their quality. Spelling errors could be categorized into five categories. One of them is real-word errors, which are misspelled words that have been wrongly converted into another word in the language. Detection of such errors requires discourse analysis rather than just checking the word in a dictionary. We propose a discourse-aware discriminative model to improve the results of context-sensitive spell-checkers by reranking their resulted n-best list. We augment the proposed reranker into two existing context-sensitive spell-checker systems; one of them is based on statistical machine translation and the other one is based on language model. We choose the keywords of the whole document as contextual features of the model and improve the results of both systems by employing the features in a log-linear reranker system. We evaluated the system on two different languages: English and Persian. The results of the experiments in English language on the Wall street journal test set show improvements of 4.5% and 5.2% in detection and correction recall, respectively, in comparison to the baseline method. The mentioned improvement on recall metric was achieved with comparable precision. We also achieve state-of-the-art performance on the Persian language. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "neg:1840504_2",
"text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.",
"title": ""
},
{
"docid": "neg:1840504_3",
"text": "We propose in this work a general procedure to efficient EM-based design of single-layer SIW interconnects, including their transitions to microstrip lines. Our starting point is developed by exploiting available empirical knowledge for SIW. We propose an efficient SIW surrogate model for direct EM design optimization in two stages: first optimizing the SIW width to achieve the specified low cutoff frequency, followed by the transition optimization to reduce reflections and extend the dominant mode bandwidth. Our procedure is illustrated by designing a SIW interconnect on a standard FR4-based substrate.",
"title": ""
},
{
"docid": "neg:1840504_4",
"text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.",
"title": ""
},
{
"docid": "neg:1840504_5",
"text": "In this paper we propose a novel, passive approach for detecting and tracking malicious flux service networks. Our detection system is based on passive analysis of recursive DNS (RDNS) traffic traces collected from multiple large networks. Contrary to previous work, our approach is not limited to the analysis of suspicious domain names extracted from spam emails or precompiled domain blacklists. Instead, our approach is able to detect malicious flux service networks in-the-wild, i.e., as they are accessed by users who fall victims of malicious content advertised through blog spam, instant messaging spam, social website spam, etc., beside email spam. We experiment with the RDNS traffic passively collected at two large ISP networks. Overall, our sensors monitored more than 2.5 billion DNS queries per day from millions of distinct source IPs for a period of 45 days. Our experimental results show that the proposed approach is able to accurately detect malicious flux service networks. Furthermore, we show how our passive detection and tracking of malicious flux service networks may benefit spam filtering applications.",
"title": ""
},
{
"docid": "neg:1840504_6",
"text": "Purpose – This paper aims to survey the web sites of the academic libraries of the Association of Research Libraries (USA) regarding the adoption of Web 2.0 technologies. Design/methodology/approach – The websites of 100 member academic libraries of the Association of Research Libraries (USA) were surveyed. Findings – All libraries were found to be using various tools of Web 2.0. Blogs, microblogs, RSS, instant messaging, social networking sites, mashups, podcasts, and vodcasts were widely adopted, while wikis, photo sharing, presentation sharing, virtual worlds, customized webpage and vertical search engines were used less. Libraries were using these tools for sharing news, marketing their services, providing information literacy instruction, providing information about print and digital resources, and soliciting feedback of users. Originality/value – The paper is useful for future planning of Web 2.0 use in academic libraries.",
"title": ""
},
{
"docid": "neg:1840504_7",
"text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840504_8",
"text": "Background: Breast cancer is a major public health problem globally. The ongoing epidemiological, socio-cultural\nand demographic transition by accentuating the associated risk factors has disproportionately increased the incidence\nof breast cancer cases and resulting mortality in developing countries like India. Early diagnosis with rapid initiation\nof treatment reduces breast cancer mortality. Therefore awareness of breast cancer risk and a willingness to undergo\nscreening are essential. The objective of the present study was to assess the knowledge and practices relating to screening\nfor breast cancer among women in Delhi. Methods: Data were obtained from 222 adult women using a pretested selfadministered\nquestionnaire. Results: Rates for knowledge of known risk factors of breast cancer were: family history\nof breast cancer, 59.5%; smoking, 57.7%; old age, 56.3%; lack of physical exercise, 51.9%; lack of breastfeeding,\n48.2%; late menopause, 37.4%; and early menarche, 34.7%. Women who were aged < 30 and those who were unmarried\nregistered significantly higher knowledge scores (p ≤ 0.01). Breast self-examination (BSE) was regularly practiced\nat-least once a month by 41.4% of the participants. Some 48% knew mammography has a role in the early detection\nof breast cancer. Since almost three-fourths of the participants believed BSE could help in early diagnosis of breast\ncancer, which is not supported by evidence, future studies should explore the consequences of promoting BSE at the\npotential expense of screening mammography. Conclusion: Our findings highlight the need for awareness generation\namong adult women regarding risk factors and methods for early detection of breast cancer.",
"title": ""
},
{
"docid": "neg:1840504_9",
"text": "Mines deployed in post-war countries pose severe threats to civilians and hamper the reconstruction effort in war hit societies. In the scope of the EU FP7 TIRAMISU Project, a toolbox for humanitarian demining missions is being developed by the consortium members. In this article we present the FSR Husky, an affordable, lightweight and autonomous all terrain robotic system, developed to assist human demining operation teams. Intended to be easily deployable on the field, our robotic solution has the ultimate goal of keeping humans away from the threat, safeguarding their lives. A detailed description of the modular robotic system architecture is presented, and several real world experiments are carried out to validate the robot’s functionalities and illustrate continuous work in progress on minefield coverage, mine detection, outdoor localization, navigation, and environment perception. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840504_10",
"text": "We construct a polyhedron that is topologically convex (i.e., has the graph of a convex polyhedron) yet has no vertex unfolding: no matter how we cut along the edges and keep faces attached at vertices to form a connected (hinged) surface, the surface necessarily unfolds with overlap.",
"title": ""
},
{
"docid": "neg:1840504_11",
"text": "In this paper, we describe our microblog realtime filtering system developed and submitted for the Text Retrieval Conference (TREC 2015) microblog track. We submitted six runs for two tasks related to real-time filtering by using various Information Retrieval (IR), and Machine Learning (ML) techniques to analyze the Twitter sample live stream and match relevant tweets corresponding to specific user interest profiles. Evaluation results demonstrate the effectiveness of our approach as we achieved 3 of the top 7 best scores among automatic submissions across all participants and obtained the best (or close to best) scores in more than 25% of the evaluated topics for the real-time mobile push notification task.",
"title": ""
},
{
"docid": "neg:1840504_12",
"text": "This work describes a method for real-time motion detection using an active camera mounted on a padtilt platform. Image mapping is used to align images of different viewpoints so that static camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pankilt angles between successive frames are as large as 3\".",
"title": ""
},
{
"docid": "neg:1840504_13",
"text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.",
"title": ""
},
{
"docid": "neg:1840504_14",
"text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.",
"title": ""
},
{
"docid": "neg:1840504_15",
"text": "Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, this connectivity that supports ATM systems also provides potential system access points that results in vulnerability to cyberattacks. This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system along with a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. This comparison provides the functionality to continuously monitor the system for abnormalities that would result from a cyberattack. In case of any deviation between two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subject to emulated cyberattacks. The evaluation results showed that the ATM system, when operating properly in the absence of attacks, improved average vehicle speed in the system to 60mph (a 13% increase compared to the baseline case without ATM). However, when subject to cyberattack, the mean speed reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system however, allowed the ATM system to revert to an expected state with a mean speed of 59mph and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.",
"title": ""
},
{
"docid": "neg:1840504_16",
"text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.",
"title": ""
},
{
"docid": "neg:1840504_17",
"text": "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multiagent domains.",
"title": ""
},
{
"docid": "neg:1840504_18",
"text": "Grice (1957) drew a famous distinction between natural(N) and non-natural(NN) meaning, where what is meant(NN) is broadly equivalent to what is intentionally communicated. This paper argues that Grice’s dichotomy overlooks the fact that spontaneously occurring natural signs may be intentionally shown , and hence used in intentional communication. It also argues that some naturally occurring behaviours have a signalling function, and that the existence of such natural codes provides further evidence that Grice’s original distinction was not exhaustive. The question of what kind of information, in cognitive terms, these signals encode is also examined.",
"title": ""
},
{
"docid": "neg:1840504_19",
"text": "The purpose of this systematic analysis of nursing simulation literature between 2000 -2007 was to determine how learning theory was used to design and assess learning that occurs in simulations. Out of the 120 articles in which designing nursing simulations was reported, 16 referenced learning or developmental theory as the basis of how and why they set up the simulation. Of the 16 articles that used a learning type of foundation, only two considered learning as a cognitive task. More research is needed that investigates the efficacy of simulation for improving student learning. The study concludes that most nursing faculty approach simulation from a teaching paradigm rather than a learning paradigm. For simulation to foster student learning there must be a fundamental shift from a teaching paradigm to a learning paradigm and a foundational learning theory to design and evaluate simulation should be used. Examples of how to match simulation with learning theory are included.",
"title": ""
}
] |
1840505 | Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation | [
{
"docid": "pos:1840505_0",
"text": "This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in the 3D space. In this context the paper makes the following contributions: (i) leveraging on Kinect device, we propose a multimodal method that rely on depth sensing to obtain robust and accurate head pose tracking even under large head pose, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the image that exploits the 3D mesh tracking, allowing to conduct a head pose free eye-in-head gaze directional estimation; (iii) a simple way of collecting ground truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.",
"title": ""
}
] | [
{
"docid": "neg:1840505_0",
"text": "Whole-cell biosensors have several advantages for the detection of biological substances and have proven to be useful analytical tools. However, several hurdles have limited whole-cell biosensor application in the clinic, primarily their unreliable operation in complex media and low signal-to-noise ratio. We report that bacterial biosensors with genetically encoded digital amplifying genetic switches can detect clinically relevant biomarkers in human urine and serum. These bactosensors perform signal digitization and amplification, multiplexed signal processing with the use of Boolean logic gates, and data storage. In addition, we provide a framework with which to quantify whole-cell biosensor robustness in clinical samples together with a method for easily reprogramming the sensor module for distinct medical detection agendas. Last, we demonstrate that bactosensors can be used to detect pathological glycosuria in urine from diabetic patients. These next-generation whole-cell biosensors with improved computing and amplification capacity could meet clinical requirements and should enable new approaches for medical diagnosis.",
"title": ""
},
{
"docid": "neg:1840505_1",
"text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional British Journal of Educational Technology (2009) doi:10.1111/j.1467-8535.2009.01007.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. 
We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, 2 British Journal of Educational Technology © 2009 The Authors. Journal compilation © 2009 Becta. only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. 
Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. Expectancy-value 3 © 2009 The Authors. Journal compilation © 2009 Becta. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move",
"title": ""
},
{
"docid": "neg:1840505_2",
"text": "The most popular metric distance used in iris code matching is Hamming distance. In this paper, we improve the performance of iris code matching stage by applying adaptive Hamming distance. Proposed method works with Hamming subsets with adaptive length. Based on density of masked bits in the Hamming subset, each subset is able to expand and adjoin to the right or left neighbouring bits. The adaptive behaviour of Hamming subsets increases the accuracy of Hamming distance computation and improves the performance of iris code matching. Results of applying proposed method on Chinese Academy of Science Institute of Automation, CASIA V3.3 shows performance of 99.96% and false rejection rate 0.06.",
"title": ""
},
{
"docid": "neg:1840505_3",
"text": "This paper uses an ant colony meta-heuristic optimization method to solve the redundancy allocation problem (RAP). The RAP is a well known NP-hard problem which has been the subject of much prior work, generally in a restricted form where each subsystem must consist of identical components in parallel to make computations tractable. Meta-heuristic methods overcome this limitation, and offer a practical way to solve large instances of the relaxed RAP where different components can be placed in parallel. The ant colony method has not yet been used in reliability design, yet it is a method that is expressly designed for combinatorial problems with a neighborhood structure, as in the case of the RAP. An ant colony optimization algorithm for the RAP is devised & tested on a well-known suite of problems from the literature. It is shown that the ant colony method performs with little variability over problem instance or random number seed. It is competitive with the best-known heuristics for redundancy allocation.",
"title": ""
},
{
"docid": "neg:1840505_4",
"text": "Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although are applicable for building a Chinese knowledge base, still suffer from two challenges. The first is that it requires great human efforts to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To solve these challenges, we propose a never-ending Chinese Knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is of ever-increasing in size and constantly updated. Specially, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end facts extraction model. We further propose a smart active update strategy to keep the freshness of our knowledge base with little human costs. The 164 million API calls of the published services justify the success of our system.",
"title": ""
},
{
"docid": "neg:1840505_5",
"text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.",
"title": ""
},
{
"docid": "neg:1840505_6",
"text": "The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the eq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.",
"title": ""
},
{
"docid": "neg:1840505_7",
"text": "The article presents an overview of current specialized ontology engineering tools, as well as texts’ annotation tools based on ontologies. The main functions and features of these tools, their advantages and disadvantages are discussed. A systematic comparative analysis of means for engineering ontologies is presented. ACM Classification",
"title": ""
},
{
"docid": "neg:1840505_8",
"text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed- base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.",
"title": ""
},
{
"docid": "neg:1840505_9",
"text": "The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.",
"title": ""
},
{
"docid": "neg:1840505_10",
"text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.",
"title": ""
},
{
"docid": "neg:1840505_11",
"text": "Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human interventions remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. Particularly, ITCN exploited a convolutional neural network (CNN) with deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain the promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.",
"title": ""
},
{
"docid": "neg:1840505_12",
"text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.",
"title": ""
},
{
"docid": "neg:1840505_13",
"text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"title": ""
},
{
"docid": "neg:1840505_14",
"text": "The next generation of multimedia services have to be optimized in a personalized way, taking user factors into account for the evaluation of individual experience. Previous works have investigated the influence of user factors mostly in a controlled laboratory environment which often includes a limited number of users and fails to reflect real-life environment. Social media, especially Facebook, provide an interesting alternative for Internet-based subjective evaluation. In this article, we develop (and open-source) a Facebook application, named YouQ1, as an experimental platform for studying individual experience for videos. Our results show that subjective experiments based on YouQ can produce reliable results as compared to a controlled laboratory experiment. Additionally, YouQ has the ability to collect user information automatically from Facebook, which can be used for modeling individual experience.",
"title": ""
},
{
"docid": "neg:1840505_15",
"text": "This paper introduces a new email dataset, consisting of both single and thread emails, manually annotated with summaries and keywords. A total of 349 emails and threads have been annotated. The dataset is our first step toward developing automatic methods for summarization and keyword extraction from emails. We describe the email corpus, along with the annotation interface, annotator guidelines, and agreement studies.",
"title": ""
},
{
"docid": "neg:1840505_16",
"text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.",
"title": ""
},
{
"docid": "neg:1840505_17",
"text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.",
"title": ""
},
{
"docid": "neg:1840505_18",
"text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.",
"title": ""
},
{
"docid": "neg:1840505_19",
"text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.",
"title": ""
}
] |
1840506 | What recommenders recommend: an analysis of recommendation biases and possible countermeasures | [
{
"docid": "pos:1840506_0",
"text": "Although the broad social and business success of recommender systems has been achieved across several domains, there is still a long way to go in terms of user satisfaction. One of the key dimensions for significant improvement is the concept of unexpectedness. In this article, we propose a method to improve user satisfaction by generating unexpected recommendations based on the utility theory of economics. In particular, we propose a new concept of unexpectedness as recommending to users those items that depart from what they would expect from the system - the consideration set of each user. We define and formalize the concept of unexpectedness and discuss how it differs from the related notions of novelty, serendipity, and diversity. In addition, we suggest several mechanisms for specifying the users’ expectations and propose specific performance metrics to measure the unexpectedness of recommendation lists. We also take into consideration the quality of recommendations using certain utility functions and present an algorithm for providing users with unexpected recommendations of high quality that are hard to discover but fairly match their interests. Finally, we conduct several experiments on “real-world” datasets and compare our recommendation results with other methods. The proposed approach outperforms these baseline methods in terms of unexpectedness and other important metrics, such as coverage, aggregate diversity and dispersion, while avoiding any accuracy loss.",
"title": ""
},
{
"docid": "pos:1840506_1",
"text": "Recommender systems are in the center of network science, and they are becoming increasingly important in individual businesses for providing efficient, personalized services and products to users. Previous research in the field of recommendation systems focused on improving the precision of the system through designing more accurate recommendation lists. Recently, the community has been paying attention to diversity and novelty of recommendation lists as key characteristics of modern recommender systems. In many cases, novelty and precision do not go hand in hand, and the accuracy--novelty dilemma is one of the challenging problems in recommender systems, which needs efforts in making a trade-off between them.\n In this work, we propose an algorithm for providing novel and accurate recommendation to users. We consider the standard definition of accuracy and an effective self-information--based measure to assess novelty of the recommendation list. The proposed algorithm is based on item popularity, which is defined as the number of votes received in a certain time interval. Wavelet transform is used for analyzing popularity time series and forecasting their trend in future timesteps. We introduce two filtering algorithms based on the information extracted from analyzing popularity time series of the items. The popularity-based filtering algorithm gives a higher chance to items that are predicted to be popular in future timesteps. The other algorithm, denoted as a novelty and population-based filtering algorithm, is to move toward items with low popularity in past timesteps that are predicted to become popular in the future. The introduced filters can be applied as adds-on to any recommendation algorithm. In this article, we use the proposed algorithms to improve the performance of classic recommenders, including item-based collaborative filtering and Markov-based recommender systems. The experiments show that the algorithms could significantly improve both the accuracy and effective novelty of the classic recommenders.",
"title": ""
}
] | [
{
"docid": "neg:1840506_0",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital",
"title": ""
},
{
"docid": "neg:1840506_1",
"text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. According to most existing studies, TDD improves software quality and student productivity. This paper surveys the current state of TDD experiments conducted exclusively at universities. Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.",
"title": ""
},
{
"docid": "neg:1840506_2",
"text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.",
"title": ""
},
{
"docid": "neg:1840506_3",
"text": "Flexible structures may fall victim to excessive levels of vibration under the action of wind, adversely affecting serviceability and occupant comfort. To ensure the functional performance of flexible structures, various design modifications are possible, ranging from alternative structural systems to the utilization of passive and active control devices. This paper presents an overview of state-of-the-art measures to reduce structural response of buildings, including a summary of recent work in aerodynamic tailoring and a discussion of auxiliary damping devices for mitigating the wind-induced motion of structures. In addition, some discussion of the application of such devices to improve structural resistance to seismic events is also presented, concluding with detailed examples of the application of auxiliary damping devices in Australia, Canada, China, Japan, and the United States.",
"title": ""
},
{
"docid": "neg:1840506_4",
"text": "Title of Dissertation: Simulation-Based Algorithms for Markov Decision Processes Ying He, Doctor of Philosophy, 2002 Dissertation directed by: Professor Steven I. Marcus Department of Electrical & Computer Engineering Professor Michael C. Fu Department of Decision & Information Technologies Problems of sequential decision making under uncertainty are common in manufacturing, computer and communication systems, and many such problems can be formulated as Markov Decision Processes (MDPs). Motivated by a capacity expansion and allocation problem in semiconductor manufacturing, we formulate a fab-level decision making problem using a finite-horizon transient MDP model that can integrate life cycle dynamics of the fab and provide a trade-off between immediate and future benefits and costs. However, for large and complicated systems formulated as MDPs, the classical methodology to compute optimal policies, dynamic programming, suffers from the so-called “curse of dimensionality” (computational requirement increases exponentially with number of states /controls) and “curse of modeling” (an explicit model for the cost structure and/or the transition probabilities is not available). In problem settings to which our approaches apply, instead of the explicit transition probabilities, outputs are available from either a simulation model or from the actual system. Our methodology is first to find the structure of optimal policies for some special cases, and then to use the structure to construct parameterized heuristic policies for more general cases and implement simulationbased algorithms to determine parameters of the heuristic policies. For the fab-level decision-making problem, we analyze the structure of the optimal policy for a special “one-machine, two-product” case, and discuss the applicability of simulation-based algorithms. We develop several simulation-based algorithms for MDPs to overcome the difficulties of “curse of dimensionality” and “curse of modeling”, considering both theoretical and practical issues. First, we develop a simulation-based policy iteration algorithm for average cost problems under a unichain assumption, relaxing the common recurrent state assumption. Second, for weighted cost problems, we develop a new two-timescale simulation-based gradient algorithms based on perturbation analysis, provide a theoretical convergence proof, and compare it with two recently proposed simulation-based gradient algorithms. Third, we propose two new Simultaneous Perturbation Stochastic Approximation (SPSA) algorithms for weighted cost problems and verify their effectiveness via simulation; then, we consider a general SPSA algorithm for function minimization and show its convergence under a weaker assumption: the function does not have to be differentiable. To Yingjiu and my parents ...",
"title": ""
},
{
"docid": "neg:1840506_5",
"text": "This paper presents a general theoretical framework for ensemble methods of constructing signiicantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It eeciently uses all the networks of a population-none of the networks need be discarded. 2) It eeciently uses all the available data for training without over-tting. 3) It inherently performs regularization by smoothing in functional space which helps to avoid over-tting. 4) It utilizes local minima to construct improved estimates whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in closed form. Experimental results are provided which show that the ensemble method dramatically improves neural network performance on diicult real-world optical character recognition tasks.",
"title": ""
},
{
"docid": "neg:1840506_6",
"text": "Weather factors such as temperature and rainfall in residential areas and tourist destinations affect traffic flow on the surrounding roads. In this study, we attempt to find new knowledge between traffic congestion and weather by using big data processing technology. Changes in traffic congestion due to the weather are evaluated by using multiple linear regression analysis to create a prediction model and forecast traffic congestion on a daily basis. For the regression analysis, we use 48 weather forecasting factors and six dummy variables to express the days of the week. The final multiple linear regression model is then proposed based on the three analytical steps of (i) the creation of the full regression model, (ii) the removal of the variables, and (iii) residual analysis. We find that the R-squared value of the proposed model has an explanatory power of 0.6555. To verify its predictability, the proposed model then evaluates traffic congestion in July and August 2014 by comparing predicted traffic congestion with actual traffic congestion. By using the mean absolute percentage error valuation method, we show that the final multiple linear regression model has a prediction accuracy of 84.8%.",
"title": ""
},
{
"docid": "neg:1840506_7",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "neg:1840506_8",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
},
{
"docid": "neg:1840506_9",
"text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.",
"title": ""
},
{
"docid": "neg:1840506_10",
"text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.",
"title": ""
},
{
"docid": "neg:1840506_11",
"text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.",
"title": ""
},
{
"docid": "neg:1840506_12",
"text": "Loitering is a suspicious behavior that often leads to criminal actions, such as pickpocketing and illegal entry. Tracking methods can determine suspicious behavior based on trajectory, but require continuous appearance and are difficult to scale up to multi-camera systems. Using the duration of appearance of features works on multiple cameras, but does not consider major aspects of loitering behavior, such as repeated appearance and trajectory of candidates. We introduce an entropy model that maps the location of a person's features on a heatmap. It can be used as an abstraction of trajectory tracking across multiple surveillance cameras. We evaluate our method over several datasets and compare it to other loitering detection methods. The results show that our approach has similar results to state of the art, but can provide additional interesting candidates.",
"title": ""
},
{
"docid": "neg:1840506_13",
"text": "It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help developing such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulted videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. By considering several baseline approaches, we found that audio-visual approach based on lipsync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in presentation attack detection domain, resulted in 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.",
"title": ""
},
{
"docid": "neg:1840506_14",
"text": "Magnetic resonance imaging (MRI) examinations provide high-resolution information about the anatomic structure of the kidneys and are used to measure total kidney volume (TKV) in patients with Autosomal Dominant Polycystic Kidney Disease (ADPKD). Height-adjusted TKV (HtTKV) has become the gold-standard imaging biomarker for ADPKD progression at early stages of the disease when estimated glomerular filtration rate (eGFR) is still normal. However, HtTKV does not take advantage of the wealth of information provided by MRI. Here we tested whether image texture features provide additional insights into the ADPKD kidney that may be used as complementary information to existing biomarkers. A retrospective cohort of 122 patients from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP) study was identified who had T2-weighted MRIs and eGFR values over 70 mL/min/1.73m2 at the time of their baseline scan. We computed nine distinct image texture features for each patient. The ability of each feature to predict subsequent progression to CKD stage 3A, 3B, and 30% reduction in eGFR at eight-year follow-up was assessed. A multiple linear regression model was developed incorporating age, baseline eGFR, HtTKV, and three image texture features identified by stability feature selection (Entropy, Correlation, and Energy). Including texture in a multiple linear regression model (predicting percent change in eGFR) improved Pearson correlation coefficient from -0.51 (using age, eGFR, and HtTKV) to -0.70 (adding texture). Thus, texture analysis offers an approach to refine ADPKD prognosis and should be further explored for its utility in individualized clinical decision making and outcome prediction.",
"title": ""
},
{
"docid": "neg:1840506_15",
"text": "MicroProteins (miPs) are short, usually single-domain proteins that, in analogy to miRNAs, heterodimerize with their targets and exert a dominant-negative effect. Recent bioinformatic attempts to identify miPs have resulted in a list of potential miPs, many of which lack the defining characteristics of a miP. In this opinion article, we clearly state the characteristics of a miP as evidenced by known proteins that fit the definition; we explain why modulatory proteins misrepresented as miPs do not qualify as true miPs. We also discuss the evolutionary history of miPs, and how the miP concept can extend beyond transcription factors (TFs) to encompass different non-TF proteins that require dimerization for full function.",
"title": ""
},
{
"docid": "neg:1840506_16",
"text": "Using a unique high-frequency futures dataset, we characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. We find that news produces conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. Equity markets, moreover, react differently to news depending on the stage of the business cycle, which explains the low correlation between stock and bond returns when averaged over the cycle. Hence our results qualify earlier work suggesting that bond markets react most strongly to macroeconomic news; in particular, when conditioning on the state of the economy, the equity and foreign Journal of International Economics 73 (2007) 251–277 www.elsevier.com/locate/econbase ☆ This work was supported by the National Science Foundation, the Guggenheim Foundation, the BSI Gamma Foundation, and CREATES. For useful comments we thank the Editor and referees, seminar participants at the Bank for International Settlements, the BSI Gamma Foundation, the Symposium of the European Central Bank/Center for Financial Studies Research Network, the NBER International Finance and Macroeconomics program, and the American Economic Association Annual Meetings, as well as Rui Albuquerque, Annika Alexius, Boragan Aruoba, Anirvan Banerji, Ben Bernanke, Robert Connolly, Jeffrey Frankel, Lingfeng Li, Richard Lyons, Marco Pagano, Paolo Pasquariello, and Neng Wang. ⁎ Corresponding author. Department of Economics, University of Pennsylvania, 3718 Locust Walk Philadelphia, PA 19104-6297, United States. Tel.: +1 215 898 1507; fax: +1 215 573 4217. E-mail addresses: t-andersen@kellogg.nwu.edu (T.G. Andersen), boller@econ.duke.edu (T. Bollerslev), fdiebold@sas.upenn.edu (F.X. Diebold), vega@simon.rochester.edu (C. Vega). 0022-1996/$ see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jinteco.2007.02.004 exchange markets appear equally responsive. Finally, we also document important contemporaneous links across all markets and countries, even after controlling for the effects of macroeconomic news. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840506_17",
"text": "Many promising therapeutic agents are limited by their inability to reach the systemic circulation, due to the excellent barrier properties of biological membranes, such as the stratum corneum (SC) of the skin or the sclera/cornea of the eye and others. The outermost layer of the skin, the SC, is the principal barrier to topically-applied medications. The intact SC thus provides the main barrier to exogenous substances, including drugs. Only drugs with very specific physicochemical properties (molecular weight < 500 Da, adequate lipophilicity, and low melting point) can be successfully administered transdermally. Transdermal delivery of hydrophilic drugs and macromolecular agents of interest, including peptides, DNA, and small interfering RNA is problematic. Therefore, facilitation of drug penetration through the SC may involve by-pass or reversible disruption of SC molecular architecture. Microneedles (MNs), when used to puncture skin, will by-pass the SC and create transient aqueous transport pathways of micron dimensions and enhance the transdermal permeability. These micropores are orders of magnitude larger than molecular dimensions, and, therefore, should readily permit the transport of hydrophilic macromolecules. Various strategies have been employed by many research groups and pharmaceutical companies worldwide, for the fabrication of MNs. This review details various types of MNs, fabrication methods and, importantly, investigations of clinical safety of MN.",
"title": ""
},
{
"docid": "neg:1840506_18",
"text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.",
"title": ""
},
{
"docid": "neg:1840506_19",
"text": "In multi-label classification in the big data age, the number of classes can be in thousands, and obtaining sufficient training data for each class is infeasible. Zero-shot learning aims at predicting a large number of unseen classes using only labeled data from a small set of classes and external knowledge about class relations. However, previous zero-shot learning models passively accept labeled data collected beforehand, relinquishing the opportunity to select the proper set of classes to inquire labeled data and optimize the performance of unseen class prediction. To resolve this issue, we propose an active class selection strategy to intelligently query labeled data for a parsimonious set of informative classes. We demonstrate two desirable probabilistic properties of the proposed method that can facilitate unseen classes prediction. Experiments on 4 text datasets demonstrate that the active zero-shot learning algorithm is superior to a wide spectrum of baselines. We indicate promising future directions at the end of this paper.",
"title": ""
}
] |
1840507 | The Demographics of Mail Search and their Application to Query Suggestion | [
{
"docid": "pos:1840507_0",
"text": "Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 \"latent\" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in details the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels, the selection of features to the training of models. Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of Yahoo mail service, who elected to be part of such research studies. Our system achieved precision and recall rates close to 90% and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry, and could support the invention of new large-scale email discovery paradigms that had not been possible before.",
"title": ""
},
{
"docid": "pos:1840507_1",
"text": "People often repeat Web searches, both to find new information on topics they have previously explored and to re-find information they have seen in the past. The query associated with a repeat search may differ from the initial query but can nonetheless lead to clicks on the same results. This paper explores repeat search behavior through the analysis of a one-year Web query log of 114 anonymous users and a separate controlled survey of an additional 119 volunteers. Our study demonstrates that as many as 40% of all queries are re-finding queries. Re-finding appears to be an important behavior for search engines to explicitly support, and we explore how this can be done. We demonstrate that changes to search engine results can hinder re-finding, and provide a way to automatically detect repeat searches and predict repeat clicks.",
"title": ""
}
] | [
{
"docid": "neg:1840507_0",
"text": "A 5-year-old boy was followed up with migratory spermatic cord and a perineal tumour at the paediatric department after birth. He was born by Caesarean section at 38 weeks in viviparity. Weight at birth was 3650 g. Although a meningocele in the sacral region was found by MRI, there were no symptoms in particular and no other deformity was found. When he was 4 years old, he presented to our department with the perinal tumour. On examination, a slender scrotum-like tumour covering the centre of the perineal lesion, along with inflammation and ulceration around the skin of the anus, was observed. Both testes and scrotums were observed in front of the tumour (Figure 1a). An excision of the tumour and Z-plasty of the perineal lesion were performed. The subcutaneous tissue consisted of adipose tissue-like lipoma and was resected along with the tumour (Figure 1b). A Z-plasty was carefully performed in order to maintain the lefteright symmetry of the",
"title": ""
},
{
"docid": "neg:1840507_1",
"text": "New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today's large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.",
"title": ""
},
{
"docid": "neg:1840507_2",
"text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.",
"title": ""
},
{
"docid": "neg:1840507_3",
"text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.",
"title": ""
},
{
"docid": "neg:1840507_4",
"text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.",
"title": ""
},
{
"docid": "neg:1840507_5",
"text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoderdecoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.",
"title": ""
},
{
"docid": "neg:1840507_6",
"text": "Community detection emerged as an important exploratory task in complex networks analysis across many scientific domains. Many methods have been proposed to solve this problem, each one with its own mechanism and sometimes with a different notion of community. In this article, we bring most common methods in the literature together in a comparative approach and reveal their performances in both real-world networks and synthetic networks. Surprisingly, many of those methods discovered better communities than the declared ground-truth communities in terms of some topological goodness features, even on benchmarking networks with built-in communities. We illustrate different structural characteristics that these methods could identify in order to support users to choose an appropriate method according to their specific requirements on different structural qualities.",
"title": ""
},
{
"docid": "neg:1840507_7",
"text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top nrank. (2) Select the top nneg retrieved documents are negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those the most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings specific to what a model “sees,” interaction filters are model-specific.",
"title": ""
},
{
"docid": "neg:1840507_8",
"text": "Routers classify packets to determine which flow they belong to, and to decide what service they should receive. Classification may, in general, be based on an arbitrary number of fields in the packet header. Performing classification quickly on an arbitrary number of fields is known to be difficult, and has poor worst-case performance. In this paper, we consider a number of classifiers taken from real networks. We find that the classifiers contain considerable structure and redundancy that can be exploited by the classification algorithm. In particular, we find that a simple multi-stage classification algorithm, called RFC (recursive flow classification), can classify 30 million packets per second in pipelined hardware, or one million packets per second in software.",
"title": ""
},
{
"docid": "neg:1840507_9",
"text": "It is more convincing for users to have their own 3-D body shapes in the virtual fitting room when they shop clothes online. However, existing methods are limited for ordinary users to efficiently and conveniently access their 3-D bodies. We propose an efficient data-driven approach and develop an android application for 3-D body customization. Users stand naturally and their photos are taken from front and side views with a handy phone camera. They can wear casual clothes like a short-sleeved/long-sleeved shirt and short/long pants. First, we develop a user-friendly interface to semi-automatically segment the human body from photos. Then, the segmented human contours are scaled and translated to the ones under our virtual camera configurations. Through this way, we only need one camera to take photos of human in two views and do not need to calibrate the camera, which satisfy the convenience requirement. Finally, we learn body parameters that determine the 3-D body from dressed-human silhouettes with cascaded regressors. The regressors are trained using a database containing 3-D naked and dressed body pairs. Body parameters regression only costs 1.26 s on an android phone, which ensures the efficiency of our method. We invited 12 volunteers for tests, and the mean absolute estimation error for chest/waist/hip size is 2.89/1.93/2.22 centimeters. We additionally use 637 synthetic data to evaluate the main procedures of our approach.",
"title": ""
},
{
"docid": "neg:1840507_10",
"text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.",
"title": ""
},
{
"docid": "neg:1840507_11",
"text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840507_12",
"text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.",
"title": ""
},
{
"docid": "neg:1840507_13",
"text": "Legged locomotion excels when terrains become too rough for wheeled systems or open-loop walking pattern generators to succeed, i.e., when accurate foot placement is of primary importance in successfully reaching the task goal. In this paper we address the scenario where the rough terrain is traversed with a static walking gait, and where for every foot placement of a leg, the location of the foot placement was selected irregularly by a planning algorithm. Our goal is to adjust a smooth walking pattern generator with the selection of every foot placement such that the COG of the robot follows a stable trajectory characterized by a stability margin relative to the current support triangle. We propose a novel parameterization of the COG trajectory based on the current position, velocity, and acceleration of the four legs of the robot. This COG trajectory has guaranteed continuous velocity and acceleration profiles, which leads to continuous velocity and acceleration profiles of the leg movement, which is ideally suited for advanced model-based controllers. Pitch, yaw, and ground clearance of the robot are easily adjusted automatically under any terrain situation. We evaluate our gait generation technique on the Little-Dog quadruped robot when traversing complex rocky and sloped terrains.",
"title": ""
},
{
"docid": "neg:1840507_14",
"text": "Researchers strive to understand eating behavior as a means to develop diets and interventions that can help people achieve and maintain a healthy weight, recover from eating disorders, or manage their diet and nutrition for personal wellness. A major challenge for eating-behavior research is to understand when, where, what, and how people eat. In this paper, we evaluate sensors and algorithms designed to detect eating activities, more specifically, when people eat. We compare two popular methods for eating recognition (based on acoustic and electromyography (EMG) sensors) individually and combined. We built a data-acquisition system using two off-the-shelf sensors and conducted a study with 20 participants. Our preliminary results show that the system we implemented can detect eating with an accuracy exceeding 90.9% while the crunchiness level of food varies. We are developing a wearable system that can capture, process, and classify sensor data to detect eating in real-time.",
"title": ""
},
{
"docid": "neg:1840507_15",
"text": "In this paper we present novel sensory feedbacks named ”King-Kong Effects” to enhance the sensation of walking in virtual environments. King Kong Effects are inspired by special effects in movies in which the incoming of a gigantic creature is suggested by adding visual vibrations/pulses to the camera at each of its steps. In this paper, we propose to add artificial visual or tactile vibrations (King-Kong Effects or KKE) at each footstep detected (or simulated) during the virtual walk of the user. The user can be seated, and our system proposes to use vibrotactile tiles located under his/her feet for tactile rendering, in addition to the visual display. We have designed different kinds of KKE based on vertical or lateral oscillations, physical or metaphorical patterns, and one or two peaks for heal-toe contacts simulation. We have conducted different experiments to evaluate the preferences of users navigating with or without the various KKE. Taken together, our results identify the best choices for future uses of visual and tactile KKE, and they suggest a preference for multisensory combinations. Our King-Kong effects could be used in a variety of VR applications targeting the immersion of a user walking in a 3D virtual scene.",
"title": ""
},
{
"docid": "neg:1840507_16",
"text": "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles.",
"title": ""
},
{
"docid": "neg:1840507_17",
"text": "Social network analysis has gained significant attention in recent years, largely due to the success of online social networking and media-sharing sites, and the consequent availability of a wealth of social network data. In spite of the growing interest, however, there is little understanding of the potential business applications of mining social networks. While there is a large body of research on different problems and methods for social network mining, there is a gap between the techniques developed by the research community and their deployment in real-world applications. Therefore the potential business impact of these techniques is still largely unexplored.\n In this article we use a business process classification framework to put the research topics in a business context and provide an overview of what we consider key problems and techniques in social network analysis and mining from the perspective of business applications. In particular, we discuss data acquisition and preparation, trust, expertise, community structure, network dynamics, and information propagation. In each case we present a brief overview of the problem, describe state-of-the art approaches, discuss business application examples, and map each of the topics to a business process classification framework. In addition, we provide insights on prospective business applications, challenges, and future research directions. The main contribution of this article is to provide a state-of-the-art overview of current techniques while providing a critical perspective on business applications of social network analysis and mining.",
"title": ""
},
{
"docid": "neg:1840507_18",
"text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily onword similaritytasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity” is attractive, because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.",
"title": ""
}
] |
1840508 | Derivation of GFDM based on OFDM principles | [
{
"docid": "pos:1840508_0",
"text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplex (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.",
"title": ""
},
{
"docid": "pos:1840508_1",
"text": "Generalized frequency division multiplexing (GFDM) is a new concept that can be seen as a generalization of traditional OFDM. The scheme is based on the filtered multi-carrier approach and can offer an increased flexibility, which will play a significant role in future cellular applications. In this paper we present the benefits of the pulse shaped carriers in GFDM. We show that based on the FFT/IFFT algorithm, the scheme can be implemented with reasonable computational effort. Further, to be able to relate the results to the recent LTE standard, we present a suitable set of parameters for GFDM.",
"title": ""
}
] | [
{
"docid": "neg:1840508_0",
"text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.",
"title": ""
},
{
"docid": "neg:1840508_1",
"text": "Generative adversarial networks have been shown to generate very realistic images by learning through a min-max game. Furthermore, these models are known to model image spaces more easily when conditioned on class labels. In this work, we consider conditioning on fine-grained textual descriptions, thus also enabling us to produce realistic images that correspond to the input text description. Additionally, we consider the task of learning disentangled representations for images through special latent codes, such that we can move them as knobs to alter the generated image. These latent codes take on very interpretable roles and are learnt in a completely unsupervised manner, using ideas from InfoGAN. We show that the learnt latent codes that encode much more variance and semantic interpretability as compared to standard GANs by experimenting on two datasets.",
"title": ""
},
{
"docid": "neg:1840508_2",
"text": "Recurrent urinary tract infections (UTIs) are common, especially in women. Low-dose daily or postcoital antimicrobial prophylaxis is effective for prevention of recurrent UTIs and women can self-diagnose and self-treat a new UTI with antibiotics. The increasing resistance rates of Escherichia coli to antimicrobial agents has, however, stimulated interest in nonantibiotic methods for the prevention of UTIs. This article reviews the literature on efficacy of different forms of nonantibiotic prophylaxis. Future studies with lactobacilli strains (oral and vaginal) and the oral immunostimulant OM-89 are warranted.",
"title": ""
},
{
"docid": "neg:1840508_3",
"text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.",
"title": ""
},
{
"docid": "neg:1840508_4",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based base-lines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "neg:1840508_5",
"text": "Recently, a number of coding techniques have been reported to achieve near toll quality synthesized speech at bit-rates around 4 kb/s. These include variants of Code Excited Linear Prediction (CELP), Sinusoidal Transform Coding (STC) and Multi-Band Excitation (MBE). While CELP has been an effective technique for bit-rates above 6 kb/s, STC, MBE, Waveform Interpolation (WI) and Mixed Excitation Linear Prediction (MELP) [1, 2] models seem to be attractive at bit-rates below 3 kb/s. In this paper, we present a system to encode speech with high quality using MELP, a technique previously demonstrated to be effective at bit-rates of 1.6–2.4 kb/s. We have enhanced the MELP model producing significantly higher speech quality at bit-rates above 2.4 kb/s. We describe the development and testing of a high quality 4 kb/s MELP coder.",
"title": ""
},
{
"docid": "neg:1840508_6",
"text": "Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work in learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data which is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.",
"title": ""
},
{
"docid": "neg:1840508_7",
"text": "We present Spectrogram, a machine learning based statistical anomaly detection (AD) sensor for defense against web-layer code-injection attacks. These attacks include PHP file inclusion, SQL-injection and cross-sitescripting; memory-layer exploits such as buffer overflows are addressed as well. Statistical AD sensors offer the advantage of being driven by the data that is being protected and not by malcode samples captured in the wild. While models using higher order statistics can often improve accuracy, trade-offs with false-positive rates and model efficiency remain a limiting usability factor. This paper presents a new model and sensor framework that offers a favorable balance under this constraint and demonstrates improvement over some existing approaches. Spectrogram is a network situated sensor that dynamically assembles packets to reconstruct content flows and learns to recognize legitimate web-layer script input. We describe an efficient model for this task in the form of a mixture of Markovchains and derive the corresponding training algorithm. Our evaluations show significant detection results on an array of real world web layer attacks, comparing favorably against other AD approaches.",
"title": ""
},
{
"docid": "neg:1840508_8",
"text": "OBJECTIVE\nTo investigate the efficacy of home-based specific stabilizing exercises focusing on the local stabilizing muscles as the only intervention in the treatment of persistent postpartum pelvic girdle pain.\n\n\nDESIGN\nA prospective, randomized, single-blinded, clinically controlled study.\n\n\nSUBJECTS\nEighty-eight women with pelvic girdle pain were recruited 3 months after delivery.\n\n\nMETHODS\nThe treatment consisted of specific stabilizing exercises targeting the local trunk muscles. The reference group had a single telephone contact with a physiotherapist. Primary outcome was disability measured with Oswestry Disability Index. Secondary outcomes were pain, health-related quality of life (EQ-5D), symptom satisfaction, and muscle function.\n\n\nRESULTS\nNo significant differences between groups could be found at 3- or 6-month follow-up regarding primary outcome in disability. Within-group comparisons showed some improvement in both groups in terms of disability, pain, symptom satisfaction and muscle function compared with baseline, although the majority still experienced pelvic girdle pain.\n\n\nCONCLUSION\nTreatment with this home-training concept of specific stabilizing exercises targeting the local muscles was no more effective in improving consequences of persistent postpartum pelvic girdle pain than the clinically natural course. Regardless of whether treatment with specific stabilizing exercises was carried out, the majority of women still experienced some back pain almost one year after pregnancy.",
"title": ""
},
{
"docid": "neg:1840508_9",
"text": "There is a convergence in recent theories of creativity that go beyond characteristics and cognitive processes of individuals to recognize the importance of the social construction of creativity. In parallel, there has been a rise in social computing supporting the collaborative construction of knowledge. The panel will discuss the challenges and opportunities from the confluence of these two developments by bringing together the contrasting and controversial perspective of the individual panel members. It will synthesize from different perspectives an analytic framework to understand these new developments, and how to promote rigorous research methods and how to identify the unique challenges in developing evaluation and assessment methods for creativity research.",
"title": ""
},
{
"docid": "neg:1840508_10",
"text": "Many functional network properties of the human brain have been identified during rest and task states, yet it remains unclear how the two relate. We identified a whole-brain network architecture present across dozens of task states that was highly similar to the resting-state network architecture. The most frequent functional connectivity strengths across tasks closely matched the strengths observed at rest, suggesting this is an \"intrinsic,\" standard architecture of functional brain organization. Furthermore, a set of small but consistent changes common across tasks suggests the existence of a task-general network architecture distinguishing task states from rest. These results indicate the brain's functional network architecture during task performance is shaped primarily by an intrinsic network architecture that is also present during rest, and secondarily by evoked task-general and task-specific network changes. This establishes a strong relationship between resting-state functional connectivity and task-evoked functional connectivity-areas of neuroscientific inquiry typically considered separately.",
"title": ""
},
{
"docid": "neg:1840508_11",
"text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. r 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840508_12",
"text": "Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems proved useful to create several language models. Despite the large amount of studies devoted to represent texts with physical models, only a limited number of studies have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex networks methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, the performance obtained with hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents.",
"title": ""
},
{
"docid": "neg:1840508_13",
"text": "Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMS) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm - the \"randomise\" algorithm - for permutation inference with the GLM.",
"title": ""
},
{
"docid": "neg:1840508_14",
"text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.",
"title": ""
},
{
"docid": "neg:1840508_15",
"text": "An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams to grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting venues for the future research.",
"title": ""
},
{
"docid": "neg:1840508_16",
"text": "Hashing seeks an embedding of high-dimensional objects into a similarity-preserving low-dimensional Hamming space such that similar objects are indexed by binary codes with small Hamming distances. A variety of hashing methods have been developed, but most of them resort to a single view (representation) of data. However, objects are often described by multiple representations. For instance, images are described by a few different visual descriptors (such as SIFT, GIST, and HOG), so it is desirable to incorporate multiple representations into hashing, leading to multi-view hashing. In this paper we present a deep network for multi-view hashing, referred to as deep multi-view hashing, where each layer of hidden nodes is composed of view-specific and shared hidden nodes, in order to learn individual and shared hidden spaces from multiple views of data. Numerical experiments on image datasets demonstrate the useful behavior of our deep multi-view hashing (DMVH), compared to recently-proposed multi-modal deep network as well as existing shallow models of hashing.",
"title": ""
},
{
"docid": "neg:1840508_17",
"text": "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches.",
"title": ""
},
{
"docid": "neg:1840508_18",
"text": "Critical infrastructure components nowadays use microprocessor-based embedded control systems. It is often infeasible, however, to employ the same level of security measures used in general purpose computing systems, due to the stringent performance and resource constraints of embedded control systems. Furthermore, as software sits atop and relies on the firmware for proper operation, software-level techniques cannot detect malicious behavior of the firmware. In this work, we propose ConFirm, a low-cost technique to detect malicious modifications in the firmware of embedded control systems by measuring the number of low-level hardware events that occur during the execution of the firmware. In order to count these events, ConFirm leverages the Hardware Performance Counters (HPCs), which readily exist in many embedded processors. We evaluate the detection capability and performance overhead of the proposed technique on various types of firmware running on ARM- and PowerPC-based embedded processors. Experimental results demonstrate that ConFirm can detect all the tested modifications with low performance overhead.",
"title": ""
},
{
"docid": "neg:1840508_19",
"text": "This paper presents an 8-bit column-driver IC with improved deviation of voltage output (DVO) for thin-film-transistor (TFT) liquid crystal displays (LCDs). The various DVO results contributed by the output buffer of a column driver are predicted by using Monte Carlo simulation under different variation conditions. Relying on this prediction, a better compromise can be achieved between DVO and chip size. This work was implemented using 0.35-μm CMOS technology and the measured maximum DVO is only 6.2 mV.",
"title": ""
}
] |
1840509 | Architectures for deep neural network based acoustic models defined over windowed speech waveforms | [
{
"docid": "pos:1840509_0",
"text": "Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.",
"title": ""
}
] | [
{
"docid": "neg:1840509_0",
"text": "A cascadable power-on-reset (POR) delay element consuming nanowatt of peak power was developed to be used in very compact power-on-reset pulse generator (POR-PG) circuits. Operation principles and features of the POR delay element were presented in this paper. The delay element was designed, and fabricated in a 0.5µm 2P3M CMOS process. It was determined from simulation as well as measurement results that the delay element works wide supply voltage ranges between 1.8 volt and 5 volt and supply voltage rise times between 100nsec and 1msec allowing wide dynamic range POR-PG circuits. It also has very small silicon footprint. Layout size of a single POR delay element was 35µm x 55µm in 0.5µm CMOS process.",
"title": ""
},
{
"docid": "neg:1840509_1",
"text": "This paper demonstrates the co-optimization of all critical device parameters of perpendicular magnetic tunnel junctions (pMTJ) in 1 Gbit arrays with an equivalent bitcell size of 22 F2 at the 28 nm logic node for embedded STT-MRAM. Through thin-film tuning and advanced etching of sub-50 nm (diameter) pMTJ, high device performance and reliability were achieved simultaneously, including TMR = 150 %, Hc > 1350 Oe, Heff <; 100 Oe, Δ = 85, Ic (35 ns) = 94 μA, Vbreakdown = 1.5 V, and high endurance (> 1012 write cycles). Reliable switching with small temporal variations (<; 5 %) was obtained down to 10 ns. In addition, tunnel barrier integrity and high temperature device characteristics were investigated in order to ensure reliable STT-MRAM operation.",
"title": ""
},
{
"docid": "neg:1840509_2",
"text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSRinitialization and Multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of 3 techniques LVCSR-initialization, multi-task training and weighted cross-entropy gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.",
"title": ""
},
{
"docid": "neg:1840509_3",
"text": "OBJECTIVE\nTo systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.\n\n\nDATA SOURCES\nOriginal articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.\n\n\nSTUDY SELECTION\nFrom 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.\n\n\nMETHODS\nA standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.\n\n\nRESULTS\nStatistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application and all are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example asthma or diabetes mellitus, to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogs 11 approaches to such smart application, including risk adjustment and data stratification.\n\n\nCONCLUSION\nStatistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.",
"title": ""
},
{
"docid": "neg:1840509_4",
"text": "We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus the attention among five different predefined region candidates (smaller windows). This procedure is iterated providing a hierarchical image analysis.We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal. Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large amounts of object candidates, the much more reduced amount of region proposals generated by our reinforcement learning agent allows considering to extract features for each location without sharing convolutional computation among regions. Source code and models are available at https://imatge-upc.github.io/detection-2016-nipsws/.",
"title": ""
},
{
"docid": "neg:1840509_5",
"text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.",
"title": ""
},
{
"docid": "neg:1840509_6",
"text": "The rise of Natural Language Processing (NLP) opened new possibilities for various applications that were not applicable before. A morphological-rich language such as Arabic introduces a set of features, such as roots, that would assist the progress of NLP. Many tools were developed to capture the process of root extraction (stemming). Stemmers have improved many NLP tasks without explicit knowledge about its stemming accuracy. In this paper, a study is conducted to evaluate various Arabic stemmers. The study is done as a series of comparisons using a manually annotated dataset, which shows the efficiency of Arabic stemmers, and points out potential improvements to existing stemmers. The paper also presents enhanced root extractors by using light stemmers as a preprocessing phase.",
"title": ""
},
{
"docid": "neg:1840509_7",
"text": "In this paper, with the help of controllable active near-infrared (NIR) lights, we construct near-infrared differential (NIRD) images. Based on reflection model, NIRD image is believed to contain the lighting difference between images with and without active NIR lights. Two main characteristics based on NIRD images are exploited to conduct spoofing detection. Firstly, there exist obviously spoofing media around the faces in most conditions, which reflect incident lights in almost the same way as the face areas do. We analyze the pixel consistency between face and non-face areas and employ context clues to distinguish the spoofing images. Then, lighting feature, extracted only from face areas, is utilized to detect spoofing attacks of deliberately cropped medium. Merging the two features, we present a face spoofing detection system. In several experiments on self collected datasets with different spoofing media, we demonstrate the excellent results and robustness of proposed method.",
"title": ""
},
{
"docid": "neg:1840509_8",
"text": "Atherosclerosis is a chronic inflammatory disease, and is the primary cause of heart disease and stroke in Western countries. Derivatives of cannabinoids such as delta-9-tetrahydrocannabinol (THC) modulate immune functions and therefore have potential for the treatment of inflammatory diseases. We investigated the effects of THC in a murine model of established atherosclerosis. Oral administration of THC (1 mg kg-1 per day) resulted in significant inhibition of disease progression. This effective dose is lower than the dose usually associated with psychotropic effects of THC. Furthermore, we detected the CB2 receptor (the main cannabinoid receptor expressed on immune cells) in both human and mouse atherosclerotic plaques. Lymphoid cells isolated from THC-treated mice showed diminished proliferation capacity and decreased interferon-γ secretion. Macrophage chemotaxis, which is a crucial step for the development of atherosclerosis, was also inhibited in vitro by THC. All these effects were completely blocked by a specific CB2 receptor antagonist. Our data demonstrate that oral treatment with a low dose of THC inhibits atherosclerosis progression in the apolipoprotein E knockout mouse model, through pleiotropic immunomodulatory effects on lymphoid and myeloid cells. Thus, THC or cannabinoids with activity at the CB2 receptor may be valuable targets for treating atherosclerosis.",
"title": ""
},
{
"docid": "neg:1840509_9",
"text": "The magnitude of recent combat blast injuries sustained by forces fighting in Afghanistan has escalated to new levels with more troops surviving higher-energy trauma. The most complex and challenging injury pattern is the emerging frequency of high-energy IED casualties presenting in extremis with traumatic bilateral lower extremity amputations with and without pelvic and perineal blast involvement. These patients require a coordinated effort of advanced trauma and surgical care from the point of injury through definitive management. Early survival is predicated upon a balance of life-saving damage control surgery and haemostatic resuscitation. Emergent operative intervention is critical with timely surgical hemostasis, adequate wound decontamination, revision amputations, and pelvic fracture stabilization. Efficient index surgical management is paramount to prevent further physiologic insult, and a team of orthopaedic and general surgeons operating concurrently may effectively achieve this. Despite the extent and complexity, these are survivable injuries but long-term followup is necessary.",
"title": ""
},
{
"docid": "neg:1840509_10",
"text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of large and short text into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes enough for its task. The results of evaluation the proposed method shows an improvement in the text classification problem using the DTCNN compared to baseline approaches.",
"title": ""
},
{
"docid": "neg:1840509_11",
"text": "The centromere position is an important feature in analyzing chromosomes and to make karyogram. In the field of chromosome analysis the accurate determination centromere from the segmented chromosome image is a challenging task. Karyogram is an arrangement of 46 chromosomes, for finding out many genetic disorders, various abnormalities and cancers. There exist so many algorithms to detect centromere positions, but most of the algorithms cannot apply for all chromosomes because of their orientation in metaphase. Here we propose a novel algorithm that associates with some rules based on morphological features of chromosome, a GLM mask and rotation procedure. The algorithm is tested on publically available database (LK1) and images collected from RCC Trivandrum.",
"title": ""
},
{
"docid": "neg:1840509_12",
"text": "Threat evaluation (TE) is a process used to assess the threat values (TVs) of air-breathing threats (ABTs), such as air fighters, that are approaching defended assets (DAs). This study proposes an automatic method for conducting TE using radar information when ABTs infiltrate into territory where DAs are located. The method consists of target asset (TA) prediction and TE. We divide a friendly territory into discrete cells based on the effective range of anti-aircraft missiles. The TA prediction identifies the TA of each ABT by predicting the ABT’s movement through cells in the territory via a Markov chain, and the cell transition is modeled by neural networks. We calculate the TVs of the ABTs based on the TA prediction results. A simulation-based experiment revealed that the proposed method outperformed TE based on the closest point of approach or the radial speed vector methods. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840509_13",
"text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similarly to the real market data, and formulate a model which closely approximates the stylized facts. We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.",
"title": ""
},
{
"docid": "neg:1840509_14",
"text": "Today, bibliographic digital libraries play an important role in helping members of academic community search for novel research. In particular, author disambiguation for citations is a major problem during the data integration and cleaning process, since author names are usually very ambiguous. For solving this problem, we proposed two kinds of correlations between citations, namely, Topic Correlation and Web Correlation, to exploit relationships between citations, in order to identify whether two citations with the same author name refer to the same individual. The topic correlation measures the similarity between research topics of two citations; while the Web correlation measures the number of co-occurrence in web pages. We employ a pair-wise grouping algorithm to group citations into clusters. The results of experiments show that the disambiguation accuracy has great improvement when using topic correlation and Web correlation, and Web correlation provides stronger evidences about the authors of citations.",
"title": ""
},
{
"docid": "neg:1840509_15",
"text": "In this paper a first approach for digital media forensics is presented to determine the used microphones and the environments of recorded digital audio samples by using known audio steganalysis features. Our first evaluation is based on a limited exemplary test set of 10 different audio reference signals recorded as mono audio data by four microphones in 10 different rooms with 44.1 kHz sampling rate and 16 bit quantisation. Note that, of course, a generalisation of the results cannot be achieved. Motivated by the syntactical and semantical analysis of information and in particular by known audio steganalysis approaches, a first set of specific features are selected for classification to evaluate, whether this first feature set can support correct classifications. The idea was mainly driven by the existing steganalysis features and the question of applicability within a first and limited test set. In the tests presented in this paper, an inter-device analysis with different device characteristics is performed while intra-device evaluations (identical microphone models of the same manufacturer) are not considered. For classification the data mining tool WEKA with K-means as a clustering and Naive Bayes as a classification technique are applied with the goal to evaluate their classification in regard to the classification accuracy on known audio steganalysis features. Our results show, that for our test set, the used classification techniques and selected steganalysis features, microphones can be better classified than environments. These first tests show promising results but of course are based on a limited test and training set as well a specific test set generation. Therefore additional and enhanced features with different test set generation strategies are necessary to generalise the findings.",
"title": ""
},
{
"docid": "neg:1840509_16",
"text": "Optical testing of advanced CMOS circuits successfully exploits the near-infrared photon emission by hot-carriers in transistor channels (see EMMI (Ng et al., 1999) and PICA (Kash and Tsang, 1997) (Song et al., 2005) techniques). However, due to the continuous scaling of features size and supply voltage, spontaneous emission is becoming fainter and optical circuit diagnostics becomes more challenging. Here we present the experimental characterization of hot-carrier luminescence emitted by transistors in four CMOS technologies from two different manufacturers. Aim of the research is to gain a better perspective on emission trends and dependences on technological parameters. In particular, we identify luminescence changes due to short-channel effects (SCE) and we ascertain that, for each technology node, there are two operating regions, for short- and long-channels. We highlight the emission reduction of p-FETs compared to n-FETs, due to a \"red-shift\" (lower energy) of the hot-carrier distribution. Eventually, we give perspectives about emission trends in actual and future technology nodes, showing that luminescence dramatically decreases with voltage, but it recovers strength when moving from older to more advanced technology generations. Such results extend the applicability of optical testing techniques, based on present single-photon detectors, to future low-voltage chips",
"title": ""
},
{
"docid": "neg:1840509_17",
"text": "Currently, audience measurement reports of television programs are only available after a significant period of time, for example as a daily report. This paper proposes an architecture for real time measurement of television audience. Real time measurement can give channel owners and advertisers important information that can positively impact their business. We show that television viewership can be captured by set top box devices which detect the channel logo and transmit the viewership data to a server over internet. The server processes the viewership data and displays it in real time on a web-based dashboard. In addition, it has facility to display charts of hourly and location-wise viewership trends and online TRP (Television Rating Points) reports. The server infrastructure consists of in-memory database, reporting and charting libraries and J2EE based application server.",
"title": ""
},
{
"docid": "neg:1840509_18",
"text": "A new hardware scheme for computing the transition and control matrix of a parallel cyclic redundancy checksum is proposed. This opens possibilities for parallel high-speed cyclic redundancy checksum circuits that reconfigure very rapidly to new polynomials. The area requirements are lower than those for a realization storing a precomputed matrix. An additional simplification arises as only the polynomial needs to be supplied. The derived equations allow the width of the data to be processed in parallel to be selected independently of the degree of the polynomial. The new design has been simulated and outperforms a recently proposed architecture significantly in speed, area, and energy efficiency.",
"title": ""
},
{
"docid": "neg:1840509_19",
"text": "OBJECTIVES\nThis subanalysis of the TNT (Treating to New Targets) study investigates the effects of intensive lipid lowering with atorvastatin in patients with coronary heart disease (CHD) with and without pre-existing chronic kidney disease (CKD).\n\n\nBACKGROUND\nCardiovascular disease is a major cause of morbidity and mortality in patients with CKD.\n\n\nMETHODS\nA total of 10,001 patients with CHD were randomized to double-blind therapy with atorvastatin 80 mg/day or 10 mg/day. Patients with CKD were identified at baseline on the basis of an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m(2) using the Modification of Diet in Renal Disease equation. The primary efficacy outcome was time to first major cardiovascular event.\n\n\nRESULTS\nOf 9,656 patients with complete renal data, 3,107 had CKD at baseline and demonstrated greater cardiovascular comorbidity than those with normal eGFR (n = 6,549). After a median follow-up of 5.0 years, 351 patients with CKD (11.3%) experienced a major cardiovascular event, compared with 561 patients with normal eGFR (8.6%) (hazard ratio [HR] = 1.35; 95% confidence interval [CI] 1.18 to 1.54; p < 0.0001). Compared with atorvastatin 10 mg, atorvastatin 80 mg reduced the relative risk of major cardiovascular events by 32% in patients with CKD (HR = 0.68; 95% CI 0.55 to 0.84; p = 0.0003) and 15% in patients with normal eGFR (HR = 0.85; 95% CI 0.72 to 1.00; p = 0.049). Both doses of atorvastatin were well tolerated in patients with CKD.\n\n\nCONCLUSIONS\nAggressive lipid lowering with atorvastatin 80 mg was both safe and effective in reducing the excess of cardiovascular events in a high-risk population with CKD and CHD.",
"title": ""
}
] |
1840510 | Influence Maximization Across Partially Aligned Heterogenous Social Networks | [
{
"docid": "pos:1840510_0",
"text": "Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks.\n To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction.",
"title": ""
},
{
"docid": "pos:1840510_1",
"text": "Kempe et al. [4] (KKT) showed the problem of influence maximization is NP-hard and a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, it has two major sources of inefficiency. First, finding the expected spread of a node set is #P-hard. Second, the basic greedy algorithm is quadratic in the number of nodes. The first source is tackled by estimating the spread using Monte Carlo simulation or by using heuristics[4, 6, 2, 5, 1, 3]. Leskovec et al. proposed the CELF algorithm for tackling the second. In this work, we propose CELF++ and empirically show that it is 35-55% faster than CELF.",
"title": ""
}
] | [
{
"docid": "neg:1840510_0",
"text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.",
"title": ""
},
{
"docid": "neg:1840510_1",
"text": "A 32 nm generation logic technology is described incorporating 2nd-generation high-k + metal-gate technology, 193 nm immersion lithography for critical patterning layers, and enhanced channel strain techniques. The transistors feature 9 Aring EOT high-k gate dielectric, dual band-edge workfunction metal gates, and 4th-generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. Process yield, performance and reliability are demonstrated on a 291 Mbit SRAM test vehicle, with 0.171 mum2 cell size, containing >1.9 billion transistors.",
"title": ""
},
{
"docid": "neg:1840510_2",
"text": "Software Testing plays a important role in Software development because it can minimize the development cost. We Propose a Technique for Test Sequence Generation using UML Model Sequence Diagram.UML models give a lot of information that should not be ignored in testing. In This paper main features extract from Sequence Diagram after that we can write the Java Source code for that Features According to ModelJunit Library. ModelJUnit is a extended library of JUnit Library. By using that Source code we can Generate Test Case Automatic and Test Coverage. This paper describes a systematic Test Case Generation Technique performed on model based testing (MBT) approaches By Using Sequence Diagram.",
"title": ""
},
{
"docid": "neg:1840510_3",
"text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. In 1949, Hebb proposed his \" learning rule, \" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal. Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocam-pus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.",
"title": ""
},
{
"docid": "neg:1840510_4",
"text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.",
"title": ""
},
{
"docid": "neg:1840510_5",
"text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840510_6",
"text": "Aspect-level sentiment classification is a finegrained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-ofthe-art performance on aspect-level sentiment classification.",
"title": ""
},
{
"docid": "neg:1840510_7",
"text": "The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, \"things\" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.",
"title": ""
},
{
"docid": "neg:1840510_8",
"text": "OBJECTIVES\nTo test whether individual differences in gratitude are related to sleep after controlling for neuroticism and other traits. To test whether pre-sleep cognitions are the mechanism underlying this relationship.\n\n\nMETHOD\nA cross-sectional questionnaire study was conducted with a large (186 males, 215 females) community sample (ages=18-68 years, mean=24.89, S.D.=9.02), including 161 people (40%) scoring above 5 on the Pittsburgh Sleep Quality Index, indicating clinically impaired sleep. Measures included gratitude, the Pittsburgh Sleep Quality Index (PSQI), self-statement test of pre-sleep cognitions, the Mini-IPIP scales of Big Five personality traits, and the Social Desirability Scale.\n\n\nRESULTS\nGratitude predicted greater subjective sleep quality and sleep duration, and less sleep latency and daytime dysfunction. The relationship between gratitude and each of the sleep variables was mediated by more positive pre-sleep cognitions and less negative pre-sleep cognitions. All of the results were independent of the effect of the Big Five personality traits (including neuroticism) and social desirability.\n\n\nCONCLUSION\nThis is the first study to show that a positive trait is related to good sleep quality above the effect of other personality traits, and to test whether pre-sleep cognitions are the mechanism underlying the relationship between any personality trait and sleep. The study is also the first to show that trait gratitude is related to sleep and to explain why this occurs, suggesting future directions for research, and novel clinical implications.",
"title": ""
},
{
"docid": "neg:1840510_9",
"text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.",
"title": ""
},
{
"docid": "neg:1840510_10",
"text": "With the advent of image and video representation of visual scenes in digital computer, subsequent necessity of vision-substitution representation of a given image is felt. The medium for non-visual representation of an image is chosen to be sound due to well developed auditory sensing ability of human beings and wide availability of cheap audio hardware. Visionary information of an image can be conveyed to blind and partially sighted persons through auditory representation of the image within some of the known limitations of human hearing system. The research regarding image sonification has mostly evolved through last three decades. The paper also discusses in brief about the reverse mapping, termed as sound visualization. This survey approaches to summarize the methodologies and issues of the implemented and unimplemented experimental systems developed for subjective sonification of image scenes and let researchers accumulate knowledge about the previous direction of researches in this domain.",
"title": ""
},
{
"docid": "neg:1840510_11",
"text": "In this paper, we discuss the emerging application of device-free localization (DFL) using wireless sensor networks, which find people and objects in the environment in which the network is deployed, even in buildings and through walls. These networks are termed “RF sensor networks” because the wireless network itself is the sensor, using radio-frequency (RF) signals to probe the deployment area. DFL in cluttered multipath environments has been shown to be feasible, and in fact benefits from rich multipath channels. We describe modalities of measurements made by RF sensors, the statistical models which relate a person's position to channel measurements, and describe research progress in this area.",
"title": ""
},
{
"docid": "neg:1840510_12",
"text": "The Booth multiplier has been widely used for high performance signed multiplication by encoding and thereby reducing the number of partial products. A multiplier using the radix-<inline-formula><tex-math notation=\"LaTeX\">$4$ </tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq1-2493547.gif\"/></alternatives></inline-formula> (or modified Booth) algorithm is very efficient due to the ease of partial product generation, whereas the radix- <inline-formula><tex-math notation=\"LaTeX\">$8$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq2-2493547.gif\"/></alternatives></inline-formula> Booth multiplier is slow due to the complexity of generating the odd multiples of the multiplicand. In this paper, this issue is alleviated by the application of approximate designs. An approximate <inline-formula><tex-math notation=\"LaTeX\">$2$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq3-2493547.gif\"/></alternatives></inline-formula>-bit adder is deliberately designed for calculating the sum of <inline-formula><tex-math notation=\"LaTeX\">$1\\times$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq4-2493547.gif\"/></alternatives></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$2\\times$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq5-2493547.gif\"/> </alternatives></inline-formula> of a binary number. This adder requires a small area, a low power and a short critical path delay. Subsequently, the <inline-formula><tex-math notation=\"LaTeX\">$2$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq6-2493547.gif\"/></alternatives></inline-formula>-bit adder is employed to implement the less significant section of a recoding adder for generating the triple multiplicand with no carry propagation. In the pursuit of a trade-off between accuracy and power consumption, two signed <inline-formula> <tex-math notation=\"LaTeX\">$16\\times 16$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq7-2493547.gif\"/> </alternatives></inline-formula> bit approximate radix-8 Booth multipliers are designed using the approximate recoding adder with and without the truncation of a number of less significant bits in the partial products. The proposed approximate multipliers are faster and more power efficient than the accurate Booth multiplier. The multiplier with 15-bit truncation achieves the best overall performance in terms of hardware and accuracy when compared to other approximate Booth multiplier designs. Finally, the approximate multipliers are applied to the design of a low-pass FIR filter and they show better performance than other approximate Booth multipliers.",
"title": ""
},
{
"docid": "neg:1840510_13",
"text": "The primary purpose of this paper is to stimulate discussion about a research agenda for a new interdisciplinary field. This field-the study of coordination-draws upon a variety of different disciplines including computer science, organization theory, management science, economics, and psychology. Work in this new area will include developing a body of scientific theory, which we will call \"coordination theory,\" about how the activities of separate actors can be coordinated. One important use for coordination theory will be in developing and using computer and communication systems to help people coordinate their activities in new ways. We will call these systems \"coordination technology.\" Rationale There are four reasons why work in this area is timely: (1) In recent years, large numbers of people have acquired direct access to computers. These computers are now beginning to be connected to each other. Therefore, we now have, for the first time, an opportunity for vastly larger numbers of people to use computing and communications capabilities to help coordinate their work. For example, specialized new software has been developed to (a) support multiple authors working together on the same document, (b) help people display and manipulate information more effectively in face-to-face meetings, and (c) help people intelligently route and process electronic messages. It already appears likely that there will be commercially successful products of this new type (often called \"computer supported cooperative work\" or \"groupware\"), and to some observers these applications herald a paradigm shift in computer usage as significant as the earlier shifts to time-sharing and personal computing. It is less clear whether the continuing development of new computer applications in this area will depend solely on the intuitions of successful designers or whether it will also be guided by a coherent underlying theory of how people coordinate their activities now and how they might do so differently with computer support. (2) In the long run, the dramatic improvements in the costs and capabilities of information technologies are changing-by orders of magnitude-the constraints on how certain kinds of communication and coordination can occur. At the same time, there is a pervasive feeling in American business that the pace of change is accelerating and that we need to create more flexible and adaptive organizations. Together, these changes may soon lead us across a threshhold where entirely new ways of organizing human activities become desirable. For 2 example, new capabilities for communicating information faster, less expensively, and …",
"title": ""
},
{
"docid": "neg:1840510_14",
"text": "In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi agent system.",
"title": ""
},
{
"docid": "neg:1840510_15",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "neg:1840510_16",
"text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.",
"title": ""
},
{
"docid": "neg:1840510_17",
"text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.",
"title": ""
},
{
"docid": "neg:1840510_18",
"text": "Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on Ítems. The most normal technique for dealing with the recommendation mechanism is to use collaborative filtering, in which it is essential to discover the most similar users to whom you desire to make recommendations. The hypothesis of this paper is that the results obtained by applying traditional similarities measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calcúlate the singularity which exists, for each item, in the votes cast by each pair of users that you wish to compare. As such, the greater the measure of singularity result between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corrobórate the excellent behaviour of the singularity measure proposed.",
"title": ""
},
{
"docid": "neg:1840510_19",
"text": "The modern data compression is mainly based on two approaches to entropy coding: Huffman (HC) and arithmetic/range coding (AC). The former is much faster, but approximates probabilities with powers of 2, usually leading to relatively low compression rates. The latter uses nearly exact probabilities easily approaching theoretical compression rate limit (Shannon entropy), but at cost of much larger computational cost. Asymmetric numeral systems (ANS) is a new approach to accurate entropy coding, which allows to end this tradeoff between speed and rate: the recent implementation [1] provides about 50% faster decoding than HC for 256 size alphabet, with compression rate similar to provided by AC. This advantage is due to being simpler than AC: using single natural number as the state, instead of two to represent a range. Beside simplifying renormalization, it allows to put the entire behavior for given probability distribution into a relatively small table: defining entropy coding automaton. The memory cost of such table for 256 size alphabet is a few kilobytes. There is a large freedom while choosing a specific table using pseudorandom number generator initialized with cryptographic key for this purpose allows to simultaneously encrypt the data. This article also introduces and discusses many other variants of this new entropy coding approach, which can provide direct alternatives for standard AC, for large alphabet range coding, or for approximated quasi arithmetic coding.",
"title": ""
}
] |
1840511 | BUP: A Bottom-Up parser embedded in Prolog | [
{
"docid": "pos:1840511_0",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
}
] | [
{
"docid": "neg:1840511_0",
"text": "A growing number of information technology systems and services are being developed to change users’ attitudes or behavior or both. Despite the fact that attitudinal theories from social psychology have been quite extensively applied to the study of user intentions and behavior, these theories have been developed for predicting user acceptance of the information technology rather than for providing systematic analysis and design methods for developing persuasive software solutions. This article is conceptual and theory-creating by its nature, suggesting a framework for Persuasive Systems Design (PSD). It discusses the process of designing and evaluating persuasive systems and describes what kind of content and software functionality may be found in the final product. It also highlights seven underlying postulates behind persuasive systems and ways to analyze the persuasion context (the intent, the event, and the strategy). The article further lists 28 design principles for persuasive system content and functionality, describing example software requirements and implementations. Some of the design principles are novel. Moreover, a new categorization of these principles is proposed, consisting of the primary task, dialogue, system credibility, and social support categories.",
"title": ""
},
{
"docid": "neg:1840511_1",
"text": "In this paper, we propose a method for extracting travelrelated event information, such as an event name or a schedule from automatically identified newspaper articles, in which particular events are mentioned. We analyze news corpora using our method, extracting venue names from them. We then find web pages that refer to event schedules for these venues. To confirm the effectiveness of our method, we conducted several experiments. From the experimental results, we obtained a precision of 91.5% and a recall of 75.9% for the automatic extraction of event information from news articles, and a precision of 90.8% and a recall of 52.8% for the automatic identification of eventrelated web pages.",
"title": ""
},
{
"docid": "neg:1840511_2",
"text": "We focus on the task of multi-hop reading comprehension where a system is required to reason over a chain of multiple facts, distributed across multiple passages, to answer a question. Inspired by graph-based reasoning, we present a path-based reasoning approach for textual reading comprehension. It operates by generating potential paths across multiple passages, extracting implicit relations along this path, and composing them to encode each path. The proposed model achieves a 2.3% gain on the WikiHop Dev set as compared to previous state-of-the-art and, as a side-effect, is also able to explain its reasoning through explicit paths of sentences.",
"title": ""
},
{
"docid": "neg:1840511_3",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "neg:1840511_4",
"text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.",
"title": ""
},
{
"docid": "neg:1840511_5",
"text": "To assure high quality of database applications, testing database applications remains the most popularly used approach. In testing database applications, tests consist of both program inputs and database states. Assessing the adequacy of tests allows targeted generation of new tests for improving their adequacy (e.g., fault-detection capabilities). Comparing to code coverage criteria, mutation testing has been a stronger criterion for assessing the adequacy of tests. Mutation testing would produce a set of mutants (each being the software under test systematically seeded with a small fault) and then measure how high percentage of these mutants are killed (i.e., detected) by the tests under assessment. However, existing test-generation approaches for database applications do not provide sufficient support for killing mutants in database applications (in either program code or its embedded or resulted SQL queries). To address such issues, in this paper, we propose an approach called MutaGen that conducts test generation for mutation testing on database applications. In our approach, we first apply an existing approach that correlates various constraints within a database application through constructing synthesized database interactions and transforming the constraints from SQL queries into normal program code. Based on the transformed code, we generate program-code mutants and SQL-query mutants, and then derive and incorporate query-mutant-killing constraints into the transformed code. Then, we generate tests to satisfy query-mutant-killing constraints. Evaluation results show that MutaGen can effectively kill mutants in database applications, and MutaGen outperforms existing test-generation approaches for database applications in terms of strong mutant killing.",
"title": ""
},
{
"docid": "neg:1840511_6",
"text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.",
"title": ""
},
{
"docid": "neg:1840511_7",
"text": "OBJECTIVE\nA wide spectrum of space-occupying soft-tissue lesions may be discovered on MRI studies, either as incidental findings or as palpable or symptomatic masses. Characterization of a lesion as benign or indeterminate is the most important step toward optimal treatment and avoidance of unnecessary biopsy or surgical intervention.\n\n\nCONCLUSION\nThe systemic MRI interpretation approach presented in this article enables the identification of cases in which sarcoma can be excluded.",
"title": ""
},
{
"docid": "neg:1840511_8",
"text": "Bi-directional LSTMs have emerged as a standard method for obtaining per-token vector representations serving as input to various token labeling tasks (whether followed by Viterbi prediction or independent classification). This paper proposes an alternative to Bi-LSTMs for this purpose: iterated dilated convolutional neural networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. We describe a distinct combination of network structure, parameter sharing and training procedures that is not only more accurate than Bi-LSTM-CRFs, but also 8x faster at test time on long sequences. Moreover, ID-CNNs with independent classification enable a dramatic 14x testtime speedup, while still attaining accuracy comparable to the Bi-LSTM-CRF. We further demonstrate the ability of IDCNNs to combine evidence over long sequences by demonstrating their improved accuracy on whole-document (rather than per-sentence) inference. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, IDCNNs permit fixed-depth convolutions to run in parallel across entire documents. Today when many companies run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.",
"title": ""
},
{
"docid": "neg:1840511_9",
"text": "The state-of-the-art object detection networks for natural images have recently demonstrated impressive performances. However the complexity of ship detection in high resolution satellite images exposes the limited capacity of these networks for strip-like rotated assembled object detection which are common in remote sensing images. In this paper, we embrace this observation and introduce the rotated region based CNN (RR-CNN), which can learn and accurately extract features of rotated regions and locate rotated objects precisely. RR-CNN has three important new components including a rotated region of interest (RRoI) pooling layer, a rotated bounding box regression model and a multi-task method for non-maximal suppression (NMS) between different classes. Experimental results on the public ship dataset HRSC2016 confirm that RR-CNN outperforms baselines by a large margin.",
"title": ""
},
{
"docid": "neg:1840511_10",
"text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.",
"title": ""
},
{
"docid": "neg:1840511_11",
"text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840511_12",
"text": "We propose a segmentation algorithm for the purposes of large-scale flower species recognition. Our approach is based on identifying potential object regions at the time of detection. We then apply a Laplacian-based segmentation, which is guided by these initially detected regions. More specifically, we show that 1) recognizing parts of the potential object helps the segmentation and makes it more robust to variabilities in both the background and the object appearances, 2) segmenting the object of interest at test time is beneficial for the subsequent recognition. Here we consider a large-scale dataset containing 578 flower species and 250,000 images. This dataset is developed by our team for the purposes of providing a flower recognition application for general use and is the largest in its scale and scope. We tested the proposed segmentation algorithm on the well-known 102 Oxford flowers benchmark [11] and on the new challenging large-scale 578 flower dataset, that we have collected. We observed about 4% improvements in the recognition performance on both datasets compared to the baseline. The algorithm also improves all other known results on the Oxford 102 flower benchmark dataset. Furthermore, our method is both simpler and faster than other related approaches, e.g. [3, 14], and can be potentially applicable to other subcategory recognition datasets.",
"title": ""
},
{
"docid": "neg:1840511_13",
"text": "A compact rectangular slotted monopole antenna for ultra wideband (UWB) application is presented. The designed antenna has a simple structure and compact size of 25 × 26 mm2. This antenna consist of radiating patch with two steps and one slot introduced on it for bandwidth enhancement and a ground plane. Antenna is feed with 50Ω microstrip line. IE3D method of moments based simulation software is used for design and FR4 substrate of dielectric constant value 4.4 with loss tangent 0.02.",
"title": ""
},
{
"docid": "neg:1840511_14",
"text": "2",
"title": ""
},
{
"docid": "neg:1840511_15",
"text": "For a multiuser data communications system operating over a mutually cross-coupled linear channel with additive noise sources, we determine the following: (1) a linear cross-coupled receiver processor (filter) that yields the least-mean-squared error between the desired outputs and the actual outputs, and (2) a cross-coupled transmitting filter that optimally distributes the total available power among the different users, as well as the total available frequency spectrum. The structure of the optimizing filters is similar to the known 2 × 2 case encountered in problems associated with digital transmission over dually polarized radio channels.",
"title": ""
},
{
"docid": "neg:1840511_16",
"text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.",
"title": ""
},
{
"docid": "neg:1840511_17",
"text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.",
"title": ""
},
{
"docid": "neg:1840511_18",
"text": "Single document summarization is the task of producing a shorter version of a document while preserving its principal information content. In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective. We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when evaluated automatically and by humans.1",
"title": ""
},
{
"docid": "neg:1840511_19",
"text": "Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the \"block withholding attack\". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.",
"title": ""
}
] |
1840512 | Projective Feature Learning for 3D Shapes with Multi-View Depth Images | [
{
"docid": "pos:1840512_0",
"text": "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10, 000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97:45% verification accuracy on LFW is achieved with only weakly aligned faces.",
"title": ""
}
] | [
{
"docid": "neg:1840512_0",
"text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.",
"title": ""
},
{
"docid": "neg:1840512_1",
"text": "Lawrence Kohlberg (1958) agreed with Piaget's (1932) theory of moral development in principle but wanted to develop his ideas further. He used Piaget’s storytelling technique to tell people stories involving moral dilemmas. In each case, he presented a choice to be considered, for example, between the rights of some authority and the needs of some deserving individual who is being unfairly treated. One of the best known of Kohlberg’s (1958) stories concerns a man called Heinz who lived somewhere in Europe. Heinz’s wife was dying from a particular type of cancer. Doctors said a new drug might save her. The drug had been discovered by a local chemist, and the Heinz tried desperately to buy some, but the chemist was charging ten times the money it cost to make the drug, and this was much more than the Heinz could afford. Heinz could only raise half the money, even after help from family and friends. He explained to the chemist that his wife was dying and asked if he could have the drug cheaper or pay the rest of the money later. The chemist refused, saying that he had discovered the drug and was going to make money from it. The husband was desperate to save his wife, so later that night he broke into the chemist’s and stole the drug.",
"title": ""
},
{
"docid": "neg:1840512_2",
"text": "Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, maybe more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, obeying driving training in a virtual environment under the instruction of the operator. Four types of enthrones (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. Electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the extracted features from electrodes T5, TP7, TP8 and FP1 may yield better performance. SVM classification algorithm using radial basis function as kernel function obtained better results. A combined entropy-based method demonstrates good classification performance for studying driver fatigue detection.",
"title": ""
},
{
"docid": "neg:1840512_3",
"text": "According to AV vendors malicious software has been growing exponentially last years. One of the main reasons for these high volumes is that in order to evade detection, malware authors started using polymorphic and metamorphic techniques. As a result, traditional signature-based approaches to detect malware are being insufficient against new malware and the categorization of malware samples had become essential to know the basis of the behavior of malware and to fight back cybercriminals. During the last decade, solutions that fight against malicious software had begun using machine learning approaches. Unfortunately, there are few opensource datasets available for the academic community. One of the biggest datasets available was released last year in a competition hosted on Kaggle with data provided by Microsoft for the Big Data Innovators Gathering (BIG 2015). This thesis presents two novel and scalable approaches using Convolutional Neural Networks (CNNs) to assign malware to its corresponding family. On one hand, the first approach makes use of CNNs to learn a feature hierarchy to discriminate among samples of malware represented as gray-scale images. On the other hand, the second approach uses the CNN architecture introduced by Yoon Kim [12] to classify malware samples according their x86 instructions. The proposed methods achieved an improvement of 93.86% and 98,56% with respect to the equal probability benchmark.",
"title": ""
},
{
"docid": "neg:1840512_4",
"text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline; while the other is excited by a pair of meandering strips with a 180$^{\\circ}$ phase differences. Both linearly polarized modes are designed to operate at 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is less than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed from the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.",
"title": ""
},
{
"docid": "neg:1840512_5",
"text": "We review Boltzmann machines extended for time-series. These models often have recurrent structure, and back propagration through time (BPTT) is used to learn their parameters. The perstep computational complexity of BPTT in online learning, however, grows linearly with respect to the length of preceding time-series (i.e., learning rule is not local in time), which limits the applicability of BPTT in online learning. We then review dynamic Boltzmann machines (DyBMs), whose learning rule is local in time. DyBM’s learning rule relates to spike-timing dependent plasticity (STDP), which has been postulated and experimentally confirmed for biological neural networks.",
"title": ""
},
{
"docid": "neg:1840512_6",
"text": "BACKGROUND\nDiphallia is a very rare anomaly and seen once in every 5.5 million live births. True diphallia with normal penile structures is extremely rare. Surgical management for patients with complete penile duplication without any penile or urethral pathology is challenging.\n\n\nCASE REPORT\nA 4-year-old boy presented with diphallia. Initial physical examination revealed first physical examination revealed complete penile duplication, urine flow from both penises, meconium flow from right urethra, and anal atresia. Further evaluations showed double colon and rectum, double bladder, and large recto-vesical fistula. Two cavernous bodies and one spongious body were detected in each penile body. Surgical treatment plan consisted of right total penectomy and end-to-side urethra-urethrostomy. No postoperative complications and no voiding dysfunction were detected during the 18 months follow-up.\n\n\nCONCLUSION\nPenile duplication is a rare anomaly, which presents differently in each patient. Because of this, the treatment should be individualized and end-to-side urethra-urethrostomy may be an alternative to removing posterior urethra. This approach eliminates the risk of damaging prostate gland and sphincter.",
"title": ""
},
{
"docid": "neg:1840512_7",
"text": "In this paper we present an approach to control a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario our car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We will describe the control interface which is necessary for a smooth, brain controlled driving. In a second scenario, decisions for path selection at intersections and forkings are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and will present results on accuracy, reaction times and usability.",
"title": ""
},
{
"docid": "neg:1840512_8",
"text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.",
"title": ""
},
{
"docid": "neg:1840512_9",
"text": "The acceptance of open data practices by individuals and organizations lead to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various raisons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "neg:1840512_10",
"text": "We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented. Wir entwickeln einen Algorithmus mit kürzesten alternierenden Wegen für das lineare Zuordnungsproblem. Er enthält neue Routinen für die Anfangswerte und eine spezielle Implementierung der Kürzesten-Wege-Methode von Dijkstra. Sowohl für dichte als auch für dünne Probleme zeigen Testläufe, daß unser Algorithmus gleichmäßig schneller als die besten Algorithmen aus der Literatur ist. Eine Implementierung in Pascal wird angegeben.",
"title": ""
},
{
"docid": "neg:1840512_11",
"text": "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE halfprecision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forwardand back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to halfprecision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.",
"title": ""
},
{
"docid": "neg:1840512_12",
"text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.",
"title": ""
},
{
"docid": "neg:1840512_13",
"text": "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.",
"title": ""
},
{
"docid": "neg:1840512_14",
"text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.",
"title": ""
},
{
"docid": "neg:1840512_15",
"text": "Cloud federations are a new collaboration paradigm where organizations share data across their private cloud infrastructures. However, the adoption of cloud federations is hindered by federated organizations' concerns on potential risks of data leakage and data misuse. For cloud federations to be viable, federated organizations' privacy concerns should be alleviated by providing mechanisms that allow organizations to control which users from other federated organizations can access which data. We propose a novel identity and access management system for cloud federations. The system allows federated organizations to enforce attribute-based access control policies on their data in a privacy-preserving fashion. Users are granted access to federated data when their identity attributes match the policies, but without revealing their attributes to the federated organization owning data. The system also guarantees the integrity of the policy evaluation process by using block chain technology and Intel SGX trusted hardware. It uses block chain to ensure that users identity attributes and access control policies cannot be modified by a malicious user, while Intel SGX protects the integrity and confidentiality of the policy enforcement process. We present the access control protocol, the system architecture and discuss future extensions.",
"title": ""
},
{
"docid": "neg:1840512_16",
"text": "ASR short for Automatic Speech Recognition is the process of converting a spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing’s online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing’s spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to Bing search engine. A returned spelling suggestion implies that a query is misspelled; and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. KeywordsSpeech Recognition; Error Correction; Bing Spelling",
"title": ""
},
{
"docid": "neg:1840512_17",
"text": "Among tangible threats and vulnerabilities facing current biometric systems are spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access and advantages. Recently, an increasing attention has been given to this research problem. This can be attested by the growing number of articles and the various competitions that appear in major biometric forums. We have recently participated in a large consortium (TABULARASA) dealing with the vulnerabilities of existing biometric systems to spoofing attacks with the aim of assessing the impact of spoofing attacks, proposing new countermeasures, setting standards/protocols, and recording databases for the analysis of spoofing attacks to a wide range of biometrics including face, voice, gait, fingerprints, retina, iris, vein, electro-physiological signals (EEG and ECG). The goal of this position paper is to share the lessons learned about spoofing and anti-spoofing in face biometrics, and to highlight open issues and future directions.",
"title": ""
},
{
"docid": "neg:1840512_18",
"text": "Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene. Motivated by the important role of attention in human perception, we tackle this limitation by introducing unsupervised attention mechanisms that are jointly adversarially trained with the generators and discriminators. We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches. Input Ours CycleGAN [1] RA [2] DiscoGAN [3] UNIT [4] DualGAN [5] Figure 1: By explicitly modeling attention, our algorithm is able to better alter the object of interest in unsupervised image-to-image translation tasks, without changing the background at the same time.",
"title": ""
},
{
"docid": "neg:1840512_19",
"text": "Analyses of 3-D seismic data in predominantly basin-floor settings offshore Indonesia, Nigeria, and the Gulf of Mexico, reveal the extensive presence of gravity-flow depositional elements. Five key elements were observed: (1) turbidity-flow leveed channels, (2) channeloverbank sediment waves and levees, (3) frontal splays or distributarychannel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets. Each depositional element displays a unique morphology and seismic expression. The reservoir architecture of each of these depositional elements is a function of the interaction between sedimentary process, sea-floor morphology, and sediment grain-size distribution. (1) Turbidity-flow leveed-channel widths range from greater than 3 km to less than 200 m. Sinuosity ranges from moderate to high, and channel meanders in most instances migrate down-system. The highamplitude reflection character that commonly characterizes these features suggests the presence of sand within the channels. In some instances, high-sinuosity channels are associated with (2) channel-overbank sediment-wave development in proximal overbank levee settings, especially in association with outer channel bends. These sediment waves reach heights of 20 m and spacings of 2–3 km. The crests of these sediment waves are oriented normal to the inferred transport direction of turbidity flows, and the waves have migrated in an upflow direction. Channel-margin levee thickness decreases systematically down-system. Where levee thickness can no longer be resolved seismically, high-sinuosity channels feed (3) frontal splays or low-sinuosity, distributary-channel complexes. Low-sinuosity distributary-channel complexes are expressed as lobate sheets up to 5–10 km wide and tens of kilometers long that extend to the distal edges of these systems. They likely comprise sheet-like sandstone units consisting of shallow channelized and associated sand-rich overbank deposits. Also observed are (4) crevasse-splay deposits, which form as a result of the breaching of levees, commonly at channel bends. Similar to frontal splays, but smaller in size, these deposits commonly are characterized by sheet-like turbidites. (5) Debris-flow deposits comprise low-sinuosity channel fills, narrow elongate lobes, and sheets and are characterized seismically by contorted, chaotic, low-amplitude reflection patterns. These deposits commonly overlie striated or grooved pavements that can be up to tens of kilometers long, 15 m deep, and 25 m wide. Where flows are unconfined, striation patterns suggest that divergent flow is common. Debris-flow deposits extend as far basinward as turbidites, and individual debris-flow units can reach 80 m in thickness and commonly are marked by steep edges. Transparent to chaotic seismic reflection character suggest that these deposits are mud-rich. Stratigraphically, deep-water basin-floor successions commonly are characterized by mass-transport deposits at the base, overlain by turbidite frontal-splay deposits and subsequently by leveed-channel deposits. Capping this succession is another mass-transport unit ultimately overlain and draped by condensed-section deposits. This succession can be related to a cycle of relative sea-level change and associated events at the corresponding shelf edge. Commonly, deposition of a deep-water sequence is initiated with the onset of relative sea-level fall and ends with subsequent rapid relative sea-level rise. 
INTRODUCTION The understanding of deep-water depositional systems has advanced significantly in recent years. In the past, much understanding of deep-water sedimentation came from studies of outcrops, recent fan systems, and 2D reflection seismic data (Bouma 1962; Mutti and Ricci Lucchi 1972; Normark 1970, 1978; Walker 1978; Posamentier et al. 1991; Weimer 1991; Mutti and Normark 1991). However, in recent years this knowledge has advanced significantly because of (1) the interest by petroleum companies in deep-water exploration (e.g., Pirmez et al. 2000), and the advent of widely available high-quality 3D seismic data across a broad range of deep-water environments (e.g., Beaubouef and Friedman 2000; Posamentier et al. 2000), (2) the recent drilling and coring of both near-surface and reservoir-level deep-water systems (e.g., Twichell et al. 1992), and (3) the increasing utilization of deep-tow side-scan sonar and other imaging devices (e.g., Twichell et al. 1992; Kenyon and Millington 1995). It is arguably the first factor that has had the most significant impact on our understanding of deep-water systems. Three-dimensional seismic data afford an unparalleled view of the deep-water depositional environment, in some instances with vertical resolution down to 2–3 m. Seismic time slices, horizon-datum time slices, and interval attributes provide images of deep-water depositional systems in map view that can then be analyzed from a geomorphologic perspective. Geomorphologic analyses lead to the identification of depositional elements, which, when integrated with seismic profiles, can yield significant stratigraphic insight. Finally, calibration by correlation with borehole data, including logs, conventional core, and biostratigraphic samples, can provide the interpreter with an improved understanding of the geology of deep-water systems. The focus of this study is the deep-water component of a depositional sequence. We describe and discuss only those elements and stratigraphic successions that are present in deep-water depositional environments. The examples shown in this study largely are Pleistocene in age and most are encountered within the uppermost 400 m of substrate. These relatively shallowly buried features represent the full range of lowstand deep-water depositional sequences from early and late lowstand through transgressive and highstand deposits. Because they are not buried deeply, these stratigraphic units commonly are well-imaged on 3D seismic data. It is also noteworthy that although the examples shown here largely are of Pleistocene age, the age of these deposits should not play a significant role in subsequent discussion. What determines the architecture of deep-water deposits are the controlling parameters of flow discharge, sand-to-mud ratio, slope length, slope gradient, and rugosity of the seafloor, and not the age of the deposits. It does not matter whether these deposits are Pleistocene, Carboniferous, or Precambrian; the physical "first principles" of sediment gravity flow apply without distinguishing between when these deposits formed. However, from the perspective of studying deep-water turbidites it is advantageous that the Pleistocene was such an active time in the deep-water environment, resulting in deposition of numerous shallowly buried, well-imaged, deep-water systems. Depositional Elements Approach This study is based on the grouping of similar geomorphic features referred to as depositional elements. [FIG. 1.—Schematic depiction of principal depositional elements in deep-water settings.] Depositional elements are defined by
Mutti and Normark (1991) as the basic mappable components of both modern and ancient turbidite systems and stages that can be recognized in marine, outcrop, and subsurface studies. These features are the building blocks of landscapes. The focus of this study is to use 3D seismic data to characterize the geomorphology and stratigraphy of deep-water depositional elements and infer process of deposition where appropriate. Depositional elements can vary from place to place and in the same place through time with changes of environmental parameters such as sand-to-mud ratio, flow discharge, and slope gradient. In some instances, systematic changes in these environmental parameters can be tied back to changes of relative sea level. The following depositional elements will be discussed: (1) turbidity-flow leveed channels, (2) overbank sediment waves and levees, (3) frontal splays or distributary-channel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets (Fig. 1). Each element is described and depositional processes are discussed. Finally, the exploration significance of each depositional element is reviewed. Examples are drawn from three deep-water slope and basin-floor settings: the Gulf of Mexico, offshore Nigeria, and offshore eastern Kalimantan, Indonesia. We utilized various visualization techniques, including 3D perspective views, horizon slices, and horizon and interval attribute displays, to bring out the detailed characteristics of depositional elements and their respective geologic settings. The deep-water depositional elements we present here are commonly characterized by peak seismic frequencies in excess of 100 Hz. The vertical resolution at these shallow depths of burial is in the range of 3–4 m, thus affording high-resolution images of depositional elements. We hope that our study, based on observations from the shallow subsurface, will provide general insights into the reservoir architecture of deep-water depositional elements, which can be extrapolated to more poorly resolved deep-water systems encountered at deeper exploration depths. DEPOSITIONAL ELEMENTS The following discussion focuses on five depositional elements in deep-water environments. These include turbidity-flow leveed channels, overbank or levee deposits, frontal splays or distributary-channel complexes, crevasse splays, and debris-flow sheets, lobes, and channels (Fig. 1). Turbidity-Flow Leveed Channels Leveed channels are common depositional elements in slope and basin-floor environments. Leveed channels observed in this study range in width from 3 km to less than 250 m and in sinuosity (i.e., the ratio of channel-axis length to channel-belt length) between 1.2 and 2.2. Some leveed channels are internally characterized by complex cut-and-fill architecture. Many leveed channels show evidence",
"title": ""
}
] |