query_id (stringlengths 1-6) | query (stringlengths 2-185) | positive_passages (listlengths 1-121) | negative_passages (listlengths 15-100)
---|---|---|---
1840113 | Impacts of implementing Enterprise Content Management Systems | [
{
"docid": "pos:1840113_0",
"text": "University of Jyväskylä, Department of Computer Science and Information Systems, PO Box 35, FIN-40014, Finland; Agder University College, Department of Information Systems, PO Box 422, 4604, Kristiansand, Norway; University of Toronto, Faculty of Information Studies, 140 St. George Street, Toronto, ON M5S 3G6, Canada; University of Oulu, Department of Information Processing Science, University of Oulu, PO Box 3000, FIN-90014, Finland Abstract Innovations in network technologies in the 1990’s have provided new ways to store and organize information to be shared by people and various information systems. The term Enterprise Content Management (ECM) has been widely adopted by software product vendors and practitioners to refer to technologies used to manage the content of assets like documents, web sites, intranets, and extranets In organizational or inter-organizational contexts. Despite this practical interest ECM has received only little attention in the information systems research community. This editorial argues that ECM provides an important and complex subfield of Information Systems. It provides a framework to stimulate and guide future research, and outlines research issues specific to the field of ECM. European Journal of Information Systems (2006) 15, 627–634. doi:10.1057/palgrave.ejis.3000648",
"title": ""
}
] | [
{
"docid": "neg:1840113_0",
"text": "Disclaimer The opinions and positions expressed in this practice guide are the authors' and do not necessarily represent the opinions and positions of the Institute of Education Sciences or the U.S. Department of Education. This practice guide should be reviewed and applied according to the specific needs of the educators and education agencies using it and with full realization that it represents only one approach that might be taken, based on the research that was available at the time of publication. This practice guide should be used as a tool to assist in decision-making rather than as a \" cookbook. \" Any references within the document to specific education products are illustrative and do not imply endorsement of these products to the exclusion of other products that are not referenced. Alternative Formats On request, this publication can be made available in alternative formats, such as Braille, large print, audiotape, or computer diskette. For more information, call the Alternative Format Center at (202) 205-8113.",
"title": ""
},
{
"docid": "neg:1840113_1",
"text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information. Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.",
"title": ""
},
{
"docid": "neg:1840113_2",
"text": "We extend our methods from [24] to reprove the Local Langlands Correspondence for GLn over p-adic fields as well as the existence of `-adic Galois representations attached to (most) regular algebraic conjugate self-dual cuspidal automorphic representations, for which we prove a local-global compatibility statement as in the book of Harris-Taylor, [10]. In contrast to the proofs of the Local Langlands Correspondence given by Henniart, [13], and Harris-Taylor, [10], our proof completely by-passes the numerical Local Langlands Correspondence of Henniart, [11]. Instead, we make use of a previous result from [24] describing the inertia-invariant nearby cycles in certain regular situations.",
"title": ""
},
{
"docid": "neg:1840113_3",
"text": "Turbo generator with evaporative cooling stator and air cooling rotor possesses many excellent qualities for mid unit. The stator bars and core are immerged in evaporative coolant, which could be cooled fully. The rotor bars are cooled by air inner cooling mode, and the cooling effect compared with hydrogen and water cooling mode is limited. So an effective ventilation system has to been employed to insure the reliability of rotor. This paper presents the comparisons of stator temperature distribution between evaporative cooling mode and air cooling mode, and the designing of rotor ventilation system combined with evaporative cooling stator.",
"title": ""
},
{
"docid": "neg:1840113_4",
"text": "Online reviews provide valuable information about products and services to consumers. However, spammers are joining the community trying to mislead readers by writing fake reviews. Previous attempts for spammer detection used reviewers' behaviors, text similarity, linguistics features and rating patterns. Those studies are able to identify certain types of spammers, e.g., those who post many similar reviews about one target entity. However, in reality, there are other kinds of spammers who can manipulate their behaviors to act just like genuine reviewers, and thus cannot be detected by the available techniques. In this paper, we propose a novel concept of a heterogeneous review graph to capture the relationships among reviewers, reviews and stores that the reviewers have reviewed. We explore how interactions between nodes in this graph can reveal the cause of spam and propose an iterative model to identify suspicious reviewers. This is the first time such intricate relationships have been identified for review spam detection. We also develop an effective computation method to quantify the trustiness of reviewers, the honesty of reviews, and the reliability of stores. Different from existing approaches, we don't use review text information. Our model is thus complementary to existing approaches and able to find more difficult and subtle spamming activities, which are agreed upon by human judges after they evaluate our results.",
"title": ""
},
{
"docid": "neg:1840113_5",
"text": "Boxer is a semantic parser for English texts with many input and output possibilities, and various ways to perform meaning analysis based on Discourse Representation Theory. This involves the various ways that meaning representations can be computed, as well as their possible semantic ingredients.",
"title": ""
},
{
"docid": "neg:1840113_6",
"text": "Weakly-supervised semantic image segmentation suffers from lacking accurate pixel-level annotations. In this paper, we propose a novel graph convolutional network-based method, called GraphNet, to learn pixel-wise labels from weak annotations. Firstly, we construct a graph on the superpixels of a training image by combining the low-level spatial relation and high-level semantic content. Meanwhile, scribble or bounding box annotations are embedded into the graph, respectively. Then, GraphNet takes the graph as input and learns to predict high-confidence pseudo image masks by a convolutional network operating directly on graphs. At last, a segmentation network is trained supervised by these pseudo image masks. We comprehensively conduct experiments on the PASCAL VOC 2012 and PASCAL-CONTEXT segmentation benchmarks. Experimental results demonstrate that GraphNet is effective to predict the pixel labels with scribble or bounding box annotations. The proposed framework yields state-of-the-art results in the community.",
"title": ""
},
{
"docid": "neg:1840113_7",
"text": "Amyotrophic lateral sclerosis (ALS) is a devastating neurodegenerative disorder characterized by death of motor neurons leading to muscle wasting, paralysis, and death, usually within 2-3 years of symptom onset. The causes of ALS are not completely understood, and the neurodegenerative processes involved in disease progression are diverse and complex. There is substantial evidence implicating oxidative stress as a central mechanism by which motor neuron death occurs, including elevated markers of oxidative damage in ALS patient spinal cord and cerebrospinal fluid and mutations in the antioxidant enzyme superoxide dismutase 1 (SOD1) causing approximately 20% of familial ALS cases. However, the precise mechanism(s) by which mutant SOD1 leads to motor neuron degeneration has not been defined with certainty, and the ultimate trigger for increased oxidative stress in non-SOD1 cases remains unclear. Although some antioxidants have shown potential beneficial effects in animal models, human clinical trials of antioxidant therapies have so far been disappointing. Here, the evidence implicating oxidative stress in ALS pathogenesis is reviewed, along with how oxidative damage triggers or exacerbates other neurodegenerative processes, and we review the trials of a variety of antioxidants as potential therapies for ALS.",
"title": ""
},
{
"docid": "neg:1840113_8",
"text": "The advent of technology in the 1990s was seen as having the potential to revolutionise electronic management of student assignments. While there were advantages and disadvantages, the potential was seen as a necessary part of the future of this aspect of academia. A number of studies (including Dalgarno et al in 2006) identified issues that supported positive aspects of electronic assignment management but consistently identified drawbacks, suggesting that the maximum achievable potential for these processes may have been reached. To confirm the perception that the technology and process are indeed ‘marking time’ a further study was undertaken at the University of South Australia (UniSA). This paper deals with the study of online receipt, assessment and feedback of assessment utilizing UniSA technology referred to as AssignIT. The study identified that students prefer a paperless approach to marking however there are concerns with the nature, timing and quality of feedback. Staff have not embraced all of the potential elements of electronic management of assignments, identified Occupational Health Safety and Welfare issues, and tended to drift back to traditional manual marking processes through a lack of understanding or confidence in their ability to properly use the technology.",
"title": ""
},
{
"docid": "neg:1840113_9",
"text": "Based on the sense definition of words available in the Bengali WordNet, an attempt is made to classify the Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of particular ambiguous lexical item is collected from Bengali WordNet. In an experimental basis we have used Naive Bayes probabilistic model as a useful classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render sentences in different meanings. In our experiment we have achieved around 84% accurate result on the sense classification over the total input sentences. We have analyzed those residual sentences that did not comply with our experiment and did affect the results to note that in many cases, wrong syntactic structures and less semantic information are the main hurdles in semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.",
"title": ""
},
{
"docid": "neg:1840113_10",
"text": "Cutaneous melanoma may in some instances be confused with seborrheic keratosis, which is a very common neoplasia, more often mistaken for actinic keratosis and verruca vulgaris. Melanoma may clinically resemble seborrheic keratosis and should be considered as its possible clinical simulator. We report a case of melanoma with dermatoscopic characteristics of seborrheic keratosis and emphasize the importance of the dermatoscopy algorithm in differentiating between a melanocytic and a non-melanocytic lesion, of the excisional biopsy for the establishment of the diagnosis of cutaneous tumors, and of the histopathologic examination in all surgically removed samples.",
"title": ""
},
{
"docid": "neg:1840113_11",
"text": "Location management refers to the problem of updating and searching the current location of mobile nodes in a wireless network. To make it efficient, the sum of update costs of location database must be minimized. Previous work relying on fixed location databases is unable to fully exploit the knowledge of user mobility patterns in the system so as to achieve this minimization. The study presents an intelligent location management approach which has interacts between intelligent information system and knowledge-base technologies, so we can dynamically change the user patterns and reduce the transition between the VLR and HLR. The study provides algorithms are ability to handle location registration and call delivery.",
"title": ""
},
{
"docid": "neg:1840113_12",
"text": "Sensor networks offer a powerful combination of distributed sensing, computing and communication. They lend themselves to countless applications and, at the same time, offer numerous challenges due to their peculiarities, primarily the stringent energy constraints to which sensing nodes are typically subjected. The distinguishing traits of sensor networks have a direct impact on the hardware design of the nodes at at least four levels: power source, processor, communication hardware, and sensors. Various hardware platforms have already been designed to test the many ideas spawned by the research community and to implement applications to virtually all fields of science and technology. We are convinced that CAS will be able to provide a substantial contribution to the development of this exciting field.",
"title": ""
},
{
"docid": "neg:1840113_13",
"text": "Although problem solving is regarded by most educators as among the most important learning outcomes, few instructional design prescriptions are available for designing problem-solving instruction and engaging learners. This paper distinguishes between well-structured problems and ill-structured problems. Well-structured problems are constrained problems with convergent solutions that engage the application of a limited number of rules and principles within welldefined parameters. Ill-structured problems possess multiple solutions, solution paths, fewer parameters which are less manipulable, and contain uncertainty about which concepts, rules, and principles are necessary for the solution or how they are organized and which solution is best. For both types of problems, this paper presents models for how learners solve them and models for designing instruction to support problem-solving skill development. The model for solving wellstructured problems is based on information processing theories of learning, while the model for solving ill-structured problems relies on an emerging theory of ill-structured problem solving and on constructivist and situated cognition approaches to learning. PROBLEM: INSTRUCTIONAL-DESIGN MODELS FOR PROBLEM SOLVING",
"title": ""
},
{
"docid": "neg:1840113_14",
"text": "We introduce a novel method for describing and controlling a 3D smoke simulation. Using harmonic analysis and principal component analysis, we define an underlying description of the fluid flow that is compact and meaningful to non-expert users. The motion of the smoke can be modified with high level tools, such as animated current curves, attractors and tornadoes. Our simulation is controllable, interactive and stable for arbitrarily long periods of time. The simulation's computational cost increases linearly in the number of motion samples and smoke particles. Our adaptive smoke particle representation conveniently incorporates the surface-like characteristics of real smoke.",
"title": ""
},
{
"docid": "neg:1840113_15",
"text": "Microblogging websites such as twitter and Sina Weibo have attracted many users to share their experiences and express their opinions on a variety of topics. Sentiment classification of microblogging texts is of great significance in analyzing users' opinion on products, persons and hot topics. However, conventional bag-of-words-based sentiment classification methods may meet some problems in processing Chinese microblogging texts because they does not consider semantic meanings of texts. In this paper, we proposed a global RNN-based sentiment method, which use the outputs of all the time-steps as features to extract the global information of texts, for sentiment classification of Chinese microblogging texts and explored different RNN-models. The experiments on two Chinese microblogging datasets show that the proposed method achieves better performance than conventional bag-of-words-based methods.",
"title": ""
},
{
"docid": "neg:1840113_16",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "neg:1840113_17",
"text": "To bring down the number of traffic accidents and increase people’s mobility companies, such as Robot Engineering Systems (RES) try to put automated vehicles on the road. RES is developing the WEpod, a shuttle capable of autonomously navigating through mixed traffic. This research has been done in cooperation with RES to improve the localization capabilities of the WEpod. The WEpod currently localizes using its GPS and lidar sensors. These have proven to be not accurate and reliable enough to safely navigate through traffic. Therefore, other methods of localization and mapping have been investigated. The primary method investigated in this research is monocular Simultaneous Localization and Mapping (SLAM). Based on literature and practical studies, ORB-SLAM has been chosen as the implementation of SLAM. Unfortunately, ORB-SLAM is unable to initialize the setup when applied on WEpod images. Literature has shown that this problem can be solved by adding depth information to the inputs of ORB-SLAM. Obtaining depth information for the WEpod images is not an arbitrary task. The sensors on the WEpod are not capable of creating the required dense depth-maps. A Convolutional Neural Network (CNN) could be used to create the depth-maps. This research investigates whether adding a depth-estimating CNN solves this initialization problem and increases the tracking accuracy of monocular ORB-SLAM. A well performing CNN is chosen and combined with ORB-SLAM. Images pass through the depth estimating CNN to obtain depth-maps. These depth-maps together with the original images are used in ORB-SLAM, keeping the whole setup monocular. ORB-SLAM with the CNN is first tested on the Kitti dataset. The Kitti dataset is used since monocular ORBSLAM initializes on Kitti images and ground-truth depth-maps can be obtained for Kitti images. Monocular ORB-SLAM’s tracking accuracy has been compared to ORB-SLAM with ground-truth depth-maps and to ORB-SLAM with estimated depth-maps. 
This comparison shows that adding estimated depth-maps increases the tracking accuracy of ORB-SLAM, but not as much as the ground-truth depth images. The same setup is tested on WEpod images. The CNN is fine-tuned on 7481 Kitti images as well as on 642 WEpod images. The performance on WEpod images of both CNN versions are compared, and used in combination with ORB-SLAM. The CNN fine-tuned on the WEpod images does not perform well, missing details in the estimated depth-maps. However, this is enough to solve the initialization problem of ORB-SLAM. The combination of ORB-SLAM and the Kitti fine-tuned CNN has a better tracking accuracy than ORB-SLAM with the WEpod fine-tuned CNN. It has been shown that the initialization problem on WEpod images is solved as well as the tracking accuracy is increased. These results show that the initialization problem of monocular ORB-SLAM on WEpod images is solved by adding the CNN. This makes it applicable to improve the current localization methods on the WEpod. Using only this setup for localization on the WEpod is not possible yet, more research is necessary. Adding this setup to the current localization methods of the WEpod could increase the localization of the WEpod. This would make it safer for the WEpod to navigate through traffic. This research sets the next step into creating a fully autonomous vehicle which reduces traffic accidents and increases the mobility of people.",
"title": ""
},
{
"docid": "neg:1840113_18",
"text": "Both translation arrest and proteasome stress associated with accumulation of ubiquitin-conjugated protein aggregates were considered as a cause of delayed neuronal death after transient global brain ischemia; however, exact mechanisms as well as possible relationships are not fully understood. The aim of this study was to compare the effect of chemical ischemia and proteasome stress on cellular stress responses and viability of neuroblastoma SH-SY5Y and glioblastoma T98G cells. Chemical ischemia was induced by transient treatment of the cells with sodium azide in combination with 2-deoxyglucose. Proteasome stress was induced by treatment of the cells with bortezomib. Treatment of SH-SY5Y cells with sodium azide/2-deoxyglucose for 15 min was associated with cell death observed 24 h after treatment, while glioblastoma T98G cells were resistant to the same treatment. Treatment of both SH-SY5Y and T98G cells with bortezomib was associated with cell death, accumulation of ubiquitin-conjugated proteins, and increased expression of Hsp70. These typical cellular responses to proteasome stress, observed also after transient global brain ischemia, were not observed after chemical ischemia. Finally, chemical ischemia, but not proteasome stress, was in SH-SY5Y cells associated with increased phosphorylation of eIF2α, another typical cellular response triggered after transient global brain ischemia. Our results showed that short chemical ischemia of SH-SY5Y cells is not sufficient to induce both proteasome stress associated with accumulation of ubiquitin-conjugated proteins and stress response at the level of heat shock proteins despite induction of cell death and eIF2α phosphorylation.",
"title": ""
},
{
"docid": "neg:1840113_19",
"text": "This paper presents a summary of the available single-phase ac-dc topologies used for EV/PHEV, level-1 and -2 on-board charging and for providing reactive power support to the utility grid. It presents the design motives of single-phase on-board chargers in detail and makes a classification of the chargers based on their future vehicle-to-grid usage. The pros and cons of each different ac-dc topology are discussed to shed light on their suitability for reactive power support. This paper also presents and analyzes the differences between charging-only operation and capacitive reactive power operation that results in increased demand from the dc-link capacitor (more charge/discharge cycles and increased second harmonic ripple current). Moreover, battery state of charge is spared from losses during reactive power operation, but converter output power must be limited below its rated power rating to have the same stress on the dc-link capacitor.",
"title": ""
}
] |
1840114 | Fundamental movement skills in children and adolescents: review of associated health benefits. | [
{
"docid": "pos:1840114_0",
"text": "PURPOSE\nCross-sectional evidence has demonstrated the importance of motor skill proficiency to physical activity participation, but it is unknown whether skill proficiency predicts subsequent physical activity.\n\n\nMETHODS\nIn 2000, children's proficiency in object control (kick, catch, throw) and locomotor (hop, side gallop, vertical jump) skills were assessed in a school intervention. In 2006/07, the physical activity of former participants was assessed using the Australian Physical Activity Recall Questionnaire. Linear regressions examined relationships between the reported time adolescents spent participating in moderate-to-vigorous or organized physical activity and their childhood skill proficiency, controlling for gender and school grade. A logistic regression examined the probability of participating in vigorous activity.\n\n\nRESULTS\nOf 481 original participants located, 297 (62%) consented and 276 (57%) were surveyed. All were in secondary school with females comprising 52% (144). Adolescent time in moderate-to-vigorous and organized activity was positively associated with childhood object control proficiency. Respective models accounted for 12.7% (p = .001), and 18.2% of the variation (p = .003). Object control proficient children became adolescents with a 10% to 20% higher chance of vigorous activity participation.\n\n\nCONCLUSIONS\nObject control proficient children were more likely to become active adolescents. Motor skill development should be a key strategy in childhood interventions aiming to promote long-term physical activity.",
"title": ""
}
] | [
{
"docid": "neg:1840114_0",
"text": "We develop a framework for rendering photographic images by directly optimizing their perceptual similarity to the original visual scene. Specifically, over the set of all images that can be rendered on a given display, we minimize the normalized Laplacian pyramid distance (NLPD), a measure of perceptual dissimilarity that is derived from a simple model of the early stages of the human visual system. When rendering images acquired with a higher dynamic range than that of the display, we find that the optimization boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods, but without manual intervention or parameter adjustment. We also demonstrate the effectiveness of the framework for a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). We show that the method may generally be used to enhance details and contrast, and, in particular, can be used on images degraded by optical scattering (e.g., fog). Finally, we demonstrate the necessity of each of the NLPD components-an initial power function, a multiscale transform, and local contrast gain control-in achieving these results and we show that NLPD is competitive with the current state-of-the-art image quality metrics.",
"title": ""
},
{
"docid": "neg:1840114_1",
"text": "Children's experiences in early childhood have significant lasting effects in their overall development and in the United States today the majority of young children spend considerable amounts of time in early childhood education settings. At the national level, there is an expressed concern about the low levels of student interest and success in science, technology, engineering, and mathematics (STEM). Bringing these two conversations together our research focuses on how young children of preschool age exhibit behaviors that we consider relevant in engineering. There is much to be explored in STEM education at such an early age, and in order to proceed we created an experimental observation protocol in which we identified various pre-engineering behaviors based on pilot observations, related literature and expert knowledge. This protocol is intended for use by preschool teachers and other professionals interested in studying engineering in the preschool classroom.",
"title": ""
},
{
"docid": "neg:1840114_2",
"text": "More than 75% of hospital-acquired or nosocomial urinary tract infections are initiated by urinary catheters, which are used during the treatment of 15-25% of hospitalized patients. Among other purposes, urinary catheters are primarily used for draining urine after surgeries and for urinary incontinence. During catheter-associated urinary tract infections, bacteria travel up to the bladder and cause infection. A major cause of catheter-associated urinary tract infection is attributed to the use of non-ideal materials in the fabrication of urinary catheters. Such materials allow for the colonization of microorganisms, leading to bacteriuria and infection, depending on the severity of symptoms. The ideal urinary catheter is made out of materials that are biocompatible, antimicrobial, and antifouling. Although an abundance of research has been conducted over the last forty-five years on the subject, the ideal biomaterial, especially for long-term catheterization of more than a month, has yet to be developed. The aim of this review is to highlight the recent advances (over the past 10years) in developing antimicrobial materials for urinary catheters and to outline future requirements and prospects that guide catheter materials selection and design.\n\n\nSTATEMENT OF SIGNIFICANCE\nThis review article intends to provide an expansive insight into the various antimicrobial agents currently being researched for urinary catheter coatings. According to CDC, approximately 75% of urinary tract infections are caused by urinary catheters and 15-25% of hospitalized patients undergo catheterization. In addition to these alarming statistics, the increasing cost and health related complications associated with catheter associated UTIs make the research for antimicrobial urinary catheter coatings even more pertinent. 
This review provides a comprehensive summary of the history, the latest progress in development of the coatings and a brief conjecture on what the future entails for each of the antimicrobial agents discussed.",
"title": ""
},
{
"docid": "neg:1840114_3",
"text": "BACKGROUND\nThe aims of this study were to identify the independent factors associated with intermittent addiction and addiction to the Internet and to examine the psychiatric symptoms in Korean adolescents when the demographic and Internet-related factors were controlled.\n\n\nMETHODS\nMale and female students (N = 912) in the 7th-12th grades were recruited from 2 junior high schools and 2 academic senior high schools located in Seoul, South Korea. Data were collected from November to December 2004 using the Internet-Related Addiction Scale and the Symptom Checklist-90-Revision. A total of 851 subjects were analyzed after excluding the subjects who provided incomplete data.\n\n\nRESULTS\nApproximately 30% (n = 258) and 4.3% (n = 37) of subjects showed intermittent Internet addiction and Internet addiction, respectively. Multivariate logistic regression analysis showed that junior high school students and students having a longer period of Internet use were significantly associated with intermittent addiction. In addition, male gender, chatting, and longer Internet use per day were significantly associated with Internet addiction. When the demographic and Internet-related factors were controlled, obsessive-compulsive and depressive symptoms were found to be independently associated factors for intermittent addiction and addiction to the Internet, respectively.\n\n\nCONCLUSIONS\nStaff working in junior or senior high schools should pay closer attention to those students who have the risk factors for intermittent addiction and addiction to the Internet. Early preventive intervention programs are needed that consider the individual severity level of Internet addiction.",
"title": ""
},
{
"docid": "neg:1840114_4",
"text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and lipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level. The effects of D-galactose on aging could be attributed to glucose and lipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.",
"title": ""
},
{
"docid": "neg:1840114_5",
"text": "This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatiotemporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.",
"title": ""
},
{
"docid": "neg:1840114_6",
"text": "In this correspondence we have not addressed the problem of constructing actual codebooks. Information theory indicates that, in principle, one can construct a codebook by drawing each component of each codeword independently, using the distribution obtained from the Blahut algorithm. This procedure is not in general practical. Practical ways to construct codewords may be found in the extensive literature on vector quantization (see, e.g., the tutorial paper by R. M. Gray [19] or the book [20]). It is not clear at this point if codebook constructing methods from the vector quantizer literature are practical in the setting of this correspondence. Alternatively, one can trade complexity and performance and construct a scalar quantizer. In this case, the distribution obtained from the Blahut algorithm may be used in the Max–Lloyd algorithm [21], [22]. A hyperspectral image can be considered as an image cube where the third dimension is the spectral domain represented by hundreds of spectral wavelengths. As a result, a hyperspectral image pixel is actually a column vector with dimension equal to the number of spectral bands and contains valuable spectral information that can be used to account for pixel variability, similarity, and discrimination. In this correspondence, we present a new hyperspectral measure, Spectral Information Measure (SIM), to describe spectral variability and two criteria, spectral information divergence and spectral discriminatory probability, for spectral similarity and discrimination, respectively. 
The spectral information measure is an information-theoretic measure which treats each pixel as a random variable using its spectral signature histogram as the desired probability distribution. Spectral Information Divergence (SID) compares the similarity between two pixels by measuring the probabilistic discrepancy between two corresponding spectral signatures. The spectral discriminatory probability calculates spectral probabilities of a spectral database (library) relative to a pixel to be identified so as to achieve material identification. In order to compare the discriminatory power of one spectral measure relative to another, a criterion is also introduced for performance evaluation, which is based on the power of discriminating one pixel from another relative to a reference pixel. The experimental results demonstrate that the new hyperspectral measure can characterize spectral variability more effectively than the commonly used Spectral Angle Mapper (SAM).",
"title": ""
},
{
"docid": "neg:1840114_7",
"text": "The Internet of Things (IoT) is a distributed system of physical objects that requires the seamless integration of hardware (e.g., sensors, actuators, electronics) and network communications in order to collect and exchange data. IoT smart objects need to be somehow identified to determine the origin of the data and to automatically detect the elements around us. One of the best positioned technologies to perform identification is RFID (Radio Frequency Identification), which in the last years has gained a lot of popularity in applications like access control, payment cards or logistics. Despite its popularity, RFID security has not been properly handled in numerous applications. To foster security in such applications, this article includes three main contributions. First, in order to establish the basics, a detailed review of the most common flaws found in RFID-based IoT systems is provided, including the latest attacks described in the literature. Second, a novel methodology that eases the detection and mitigation of such flaws is presented. Third, the latest RFID security tools are analyzed and the methodology proposed is applied through one of them (Proxmark 3) to validate it. Thus, the methodology is tested in different scenarios where tags are commonly used for identification. In such systems it was possible to clone transponders, extract information, and even emulate both tags and readers. Therefore, it is shown that the methodology proposed is useful for auditing security and reverse engineering RFID communications in IoT applications. It must be noted that, although this paper is aimed at fostering RFID communications security in IoT applications, the methodology can be applied to any RFID communications protocol.",
"title": ""
},
{
"docid": "neg:1840114_8",
"text": "The paper presents a concept where pairs of ordinary RFID tags are exploited for use as remotely read moisture sensors. The pair of tags is incorporated into one label where one of the tags is embedded in a moisture absorbent material and the other is left open. In a humid environment the moisture concentration is higher in the absorbent material than the surrounding environment, which causes degradation to the embedded tag's antenna in terms of dielectric losses and change of input impedance. The level of relative humidity or the amount of water in the absorbent material is determined for a passive RFID system by comparing the difference in RFID reader output power required to power up the open and embedded tags, respectively. It is similarly shown how the backscattered signal strength of a semi-active RFID system is proportional to the relative humidity and amount of water in the absorbent material. Typical applications include moisture detection in buildings, especially from leaking water pipe connections hidden behind walls. The presented solution has a cost comparable to that of ordinary RFID tags, and the passive system also has an infinite lifetime since no internal power supply is needed. The concept is characterized for two commercial RFID systems, one passive operating at 868 MHz and one semi-active operating at 2.45 GHz.",
"title": ""
},
{
"docid": "neg:1840114_9",
"text": "BRIDGE bot is a 158 g, 10.7 × 8.9 × 6.5 cm3, magnetic-wheeled robot designed to traverse and inspect steel bridges. Utilizing custom magnetic wheels, the robot is able to securely adhere to the bridge in any orientation. The body platform features flexible, multi-material legs that enable a variety of plane transitions as well as robot shape manipulation. The robot is equipped with a Cortex-M0 processor, inertial sensors, and a modular wireless radio. A camera is included to provide images for detection and evaluation of identified problems. The robot has been demonstrated moving through plane transitions from 45° to 340° as well as over obstacles up to 9.5 mm in height. Preliminary use of sensor feedback to improve plane transitions has also been demonstrated.",
"title": ""
},
{
"docid": "neg:1840114_10",
"text": "Each generation that enters the workforce brings with it its own unique perspectives and values, shaped by the times of their life, about work and the work environment; thus posing atypical human resources management challenges. Following the completion of an extensive quantitative study conducted in Cyprus, and by adopting a qualitative methodology, the researchers aim to further explore the occupational similarities and differences of the two prevailing generations, X and Y, currently active in the workplace. Moreover, the study investigates the effects of the perceptual generational differences on managing the diverse hospitality workplace. Industry implications, recommendations for stakeholders as well as directions for further scholarly research are discussed.",
"title": ""
},
{
"docid": "neg:1840114_11",
"text": "Customer churn prediction models aim to detect customers with a high propensity to attrite. Predictive accuracy, comprehensibility, and justifiability are three key aspects of a churn prediction model. An accurate model permits to correctly target future churners in a retention marketing campaign, while a comprehensible and intuitive rule-set allows to identify the main drivers for customers to churn, and to develop an effective retention strategy in accordance with domain knowledge. This paper provides an extended overview of the literature on the use of data mining in customer churn prediction modeling. It is shown that only limited attention has been paid to the comprehensibility and the intuitiveness of churn prediction models. Therefore, two novel data mining techniques are applied to churn prediction modeling, and benchmarked to traditional rule induction techniques such as C4.5 and RIPPER. Both AntMiner+ and ALBA are shown to induce accurate as well as comprehensible classification rule-sets. AntMiner+ is a high performing data mining technique based on the principles of Ant Colony Optimization that allows to include domain knowledge by imposing monotonicity constraints on the final rule-set. ALBA on the other hand combines the high predictive accuracy of a non-linear support vector machine model with the comprehensibility of the rule-set format. The results of the benchmarking experiments show that ALBA improves learning of classification techniques, resulting in comprehensible models with increased performance. AntMiner+ results in accurate, comprehensible, but most importantly justifiable models, unlike the other modeling techniques included in this study.",
"title": ""
},
{
"docid": "neg:1840114_12",
"text": "A large number of saliency models, each based on a different hypothesis, have been proposed over the past 20 years. In practice, while subscribing to one hypothesis or computational principle makes a model that performs well on some types of images, it hinders the general performance of a model on arbitrary images and large-scale data sets. One natural approach to improve overall saliency detection accuracy would then be fusing different types of models. In this paper, inspired by the success of late-fusion strategies in semantic analysis and multi-modal biometrics, we propose to fuse the state-of-the-art saliency models at the score level in a para-boosting learning fashion. First, saliency maps generated by several models are used as confidence scores. Then, these scores are fed into our para-boosting learner (i.e., support vector machine, adaptive boosting, or probability density estimator) to generate the final saliency map. In order to explore the strength of para-boosting learners, traditional transformation-based fusion strategies, such as Sum, Min, and Max, are also explored and compared in this paper. To further reduce the computation cost of fusing too many models, only a few of them are considered in the next step. Experimental results show that score-level fusion outperforms each individual model and can further reduce the performance gap between the current models and the human inter-observer model.",
"title": ""
},
{
"docid": "neg:1840114_13",
"text": "In what ways do the online behaviors of wizards and ogres map to players’ actual leadership status in the offline world? What can we learn from players’ experience in Massively Multiplayer Online games (MMOGs) to advance our understanding of leadership, especially leadership in online settings (E-leadership)? As part of a larger agenda in the emerging field of empirically testing the “mapping” between the online and offline worlds, this study aims to tackle a central issue in the E-leadership literature: how have technology and technology-mediated communications transformed leadership-diagnostic traits and behaviors? To answer this question, we surveyed over 18,000 players of a popular MMOG and also collected behavioral data of a subset of survey respondents over a four-month period. Motivated by leadership theories, we examined the connection between respondents’ offline leadership status and their in-game relationship-oriented and task-related behaviors. Our results indicate that individuals’ relationship-oriented behaviors in the virtual world are particularly relevant to players’ leadership status in voluntary organizations, while their task-oriented behaviors are marginally linked to offline leadership status in voluntary organizations, but not in companies.",
"title": ""
},
{
"docid": "neg:1840114_14",
"text": "BACKGROUND\nTo increase understanding of the relationships among sexual violence, paraphilias, and mental illness, the authors assessed the legal and psychiatric features of 113 men convicted of sexual offenses.\n\n\nMETHOD\n113 consecutive male sex offenders referred from prison, jail, or probation to a residential treatment facility received structured clinical interviews for DSM-IV Axis I and II disorders, including sexual disorders. Participants' legal, sexual and physical abuse, and family psychiatric histories were also evaluated. We compared offenders with and without paraphilias.\n\n\nRESULTS\nParticipants displayed high rates of lifetime Axis I and Axis II disorders: 96 (85%) had a substance use disorder; 84 (74%), a paraphilia; 66 (58%), a mood disorder (40 [35%], a bipolar disorder and 27 [24%], a depressive disorder); 43 (38%), an impulse control disorder; 26 (23%), an anxiety disorder; 10 (9%), an eating disorder; and 63 (56%), antisocial personality disorder. Presence of a paraphilia correlated positively with the presence of any mood disorder (p <.001), major depression (p =.007), bipolar I disorder (p =.034), any anxiety disorder (p=.034), any impulse control disorder (p =.006), and avoidant personality disorder (p =.013). Although offenders without paraphilias spent more time in prison than those with paraphilias (p =.019), paraphilic offenders reported more victims (p =.014), started offending at a younger age (p =.015), and were more likely to perpetrate incest (p =.005). Paraphilic offenders were also more likely to be convicted of (p =.001) or admit to (p <.001) gross sexual imposition of a minor. 
Nonparaphilic offenders were more likely to have adult victims exclusively (p =.002), a prior conviction for theft (p <.001), and a history of juvenile offenses (p =.058).\n\n\nCONCLUSIONS\nSex offenders in the study population displayed high rates of mental illness, substance abuse, paraphilias, personality disorders, and comorbidity among these conditions. Sex offenders with paraphilias had significantly higher rates of certain types of mental illness and avoidant personality disorder. Moreover, paraphilic offenders spent less time in prison but started offending at a younger age and reported more victims and more non-rape sexual offenses against minors than offenders without paraphilias. On the basis of our findings, we assert that sex offenders should be carefully evaluated for the presence of mental illness and that sex offender management programs should have a capacity for psychiatric treatment.",
"title": ""
},
{
"docid": "neg:1840114_15",
"text": "We propose and demonstrate a scheme for boosting the efficiency of entanglement distribution based on a decoherence-free subspace over lossy quantum channels. By using backward propagation of a coherent light, our scheme achieves an entanglement-sharing rate that is proportional to the transmittance T of the quantum channel in spite of encoding qubits in multipartite systems for the decoherence-free subspace. We experimentally show that highly entangled states, which can violate the Clauser-Horne-Shimony-Holt inequality, are distributed at a rate proportional to T.",
"title": ""
},
{
"docid": "neg:1840114_16",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
{
"docid": "neg:1840114_17",
"text": "This paper presents a novel flexible sliding thigh frame for a gait enhancing mechatronic system. With its unique two-layered structure, the frame is flexible in certain locations and directions, and stiff at certain other locations, so that it can fit well to the wearer's thigh and transmit the assisting torque without joint loading. The paper describes the basic mechanics of this 3D flexible frame and its stiffness characteristics. We implemented the 3D flexible frame on a gait enhancing mechatronic system and conducted experiments. The performance of the proposed mechanism is verified by simulation and experiments.",
"title": ""
},
{
"docid": "neg:1840114_18",
"text": "In psychodynamic theory, trauma is associated with a life event, which is defined by its intensity, by the inability of the person to respond adequately and by its pathologic, long-lasting effects on the psychic organization. In this paper, we describe how neurobiological changes link to psychodynamic theory. Initially, Freud believed that all types of neurosis were the result of former traumatic experiences, mainly in the form of sexual trauma. According to the first Freudian theory (1890–1897), hysteric patients suffer mainly from relevant memories. In his later theory of ‘deferred action’, i.e., the retroactive attribution of sexual or traumatic meaning to earlier events, Freud links the consequences of sexual trauma in childhood with the onset of pathology in adulthood (Boschan, 2008). The transmission of trauma from parents to children may take place from one generation to the other. The trauma that is being experienced by the child has an interpersonal character and is being reinforced by the parents’ own traumatic experience. The subject’s interpersonal exposure through the relationship with the direct victims has been recognized as a risk factor for the development of a post-traumatic stress disorder. Trauma may be transmitted from the mother to the foetus during the intrauterine life (Opendak & Sullivan, 2016). Empirical studies also demonstrate that in the first year of life infants that had witnessed violence against their mothers presented symptoms of a posttraumatic disorder. Traumatic symptomatology in infants includes eating difficulties, sleep disorders, high arousal level and excessive crying, affect disorders and relational problems with adults and peers. Infants that are directly dependent on the caregiver are more vulnerable and at a greater risk to suffer interpersonal trauma and its neurobiological consequences (Opendak & Sullivan, 2016). 
In older children symptoms were more related to the severity of violence they had been exposed to than to the mother’s actual emotional state, which shows that the relationship between mother’s and child’s trauma is different in each age stage. The type of attachment and the quality of the mother-child interactional relationship also contribute to the transmission of the trauma. According to Fonagy (2003), the mother who is experiencing trauma is no longer a source of security and becomes a source of danger. Thus, the mentalization ability may be destroyed by an attachment figure who caused the child so much stress about its own thoughts and emotions that the child avoids thoughts about the other’s subjective experience. At a neurobiological level, many studies have shown that the effects of environmental stress on the brain are mediated through molecular and cellular mechanisms. More specifically, trauma causes changes at a chemical and anatomical level, resulting in transforming the subject’s response to future stress. The imprinting mechanisms of traumatic experiences are directly related to the activation of the neurobiological circuits associated with emotion, in which the amygdala plays a central role. The traumatic experiences are strongly encoded in memory and difficult to erase. Early stress may result in impaired cognitive function related to disrupted functioning of certain areas of the hippocampus in the short or long term. Infants or young children that have suffered a traumatic experience may be unable to recollect events in a conscious way. However, they may maintain latent memory of the reactions to the experience and the intensity of the emotion. The neurobiological data support the ‘deferred action’ of the psychodynamic theory, according to which, when the impact of early interpersonal trauma is so pervasive, the effects can transcend into later stages, even after the trauma has stopped. 
The two approaches, psychodynamic and neurobiological, are not opposite, but complementary. Psychodynamic psychotherapists and neurobiologists, based on extended theoretical bases, combine data and enrich the understanding of psychiatric disorders in childhood. The study of interpersonal trauma offers a good example of how different approaches, biological and psychodynamic, may come closer and possibly be unified into a single model, which could result in more effective therapeutic approaches.",
"title": ""
},
{
"docid": "neg:1840114_19",
"text": "From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, today, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitalized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.",
"title": ""
}
] |
1840115 | Model-based Software Testing | [
{
"docid": "pos:1840115_0",
"text": "The use of context-free grammars to improve functional testing of very-large-scale integrated circuits is described. It is shown that enhanced context-free grammars are effective tools for generating test data. The discussion covers preliminary considerations, the first tests, generating systematic tests, and testing subroutines. The author's experience using context-free grammars to generate tests for VLSI circuit simulators indicates that they are remarkably effective tools that virtually anyone can use to debug virtually any program.",
"title": ""
}
] | [
{
"docid": "neg:1840115_0",
"text": "Assortment planning of substitutable products is a major operational issue that arises in many industries, such as retailing, airlines and consumer electronics. We consider a single-period joint assortment and inventory planning problem under dynamic substitution with stochastic demands, and provide complexity and algorithmic results as well as insightful structural characterizations of near-optimal solutions for important variants of the problem. First, we show that the assortment planning problem is NP-hard even for a very simple consumer choice model, where each customer is willing to buy only two products. In fact, we show that the problem is hard to approximate within a factor better than 1 − 1/e. Secondly, we show that for several interesting and practical choice models, one can devise a polynomial-time approximation scheme (PTAS), i.e., the problem can be solved efficiently to within any level of accuracy. To the best of our knowledge, this is the first efficient algorithm with provably near-optimal performance guarantees for assortment planning problems under dynamic substitution. Quite surprisingly, the algorithm we propose stocks only a constant number of different product types; this constant depends only on the desired accuracy level. This provides an important managerial insight that assortments with a relatively small number of product types can obtain almost all of the potential revenue. Furthermore, we show that our algorithm can be easily adapted for more general choice models, and present numerical experiments to show that it performs significantly better than other known approaches.",
"title": ""
},
{
"docid": "neg:1840115_1",
"text": "An experimental program of steel panel shear walls is outlined and some results are presented. The tested specimens utilized low yield strength (LYS) steel infill panels and reduced beam sections (RBS) at the beam-ends. Two specimens make allowances for penetration of the panel by utilities, which would exist in a retrofit situation. The first, consisting of multiple holes, or perforations, in the steel panel, also has the characteristic of further reducing the corresponding solid panel strength (as compared with the use of traditional steel). The second such specimen utilizes quarter-circle cutouts in the panel corners, which are reinforced to transfer the panel forces to the adjacent framing.",
"title": ""
},
{
"docid": "neg:1840115_2",
"text": "During recent years the mainstream framework for HCI research — the information-processing cognitive psychology — has gained more and more criticism because of serious problems in applying it both in research and practical design. In a debate within HCI research the capability of information-processing psychology has been questioned and new theoretical frameworks have been sought. This paper presents an overview of the situation and discusses the potential of Activity Theory as an alternative framework for HCI research and design.",
"title": ""
},
{
"docid": "neg:1840115_3",
"text": "Musical training has emerged as a useful framework for the investigation of training-related plasticity in the human brain. Learning to play an instrument is a highly complex task that involves the interaction of several modalities and higher-order cognitive functions and that results in behavioral, structural, and functional changes on time scales ranging from days to years. While early work focused on comparison of musical experts and novices, more recently an increasing number of controlled training studies provide clear experimental evidence for training effects. Here, we review research investigating brain plasticity induced by musical training, highlight common patterns and possible underlying mechanisms of such plasticity, and integrate these studies with findings and models for mechanisms of plasticity in other domains.",
"title": ""
},
{
"docid": "neg:1840115_4",
"text": "Content Security Policy (CSP) is an emerging W3C standard introduced to mitigate the impact of content injection vulnerabilities on websites. We perform a systematic, large-scale analysis of four key aspects that impact on the effectiveness of CSP: browser support, website adoption, correct configuration and constant maintenance. While browser support is largely satisfactory, with the exception of few notable issues, our analysis unveils several shortcomings relative to the other three aspects. CSP appears to have a rather limited deployment as yet and, more crucially, existing policies exhibit a number of weaknesses and misconfiguration errors. Moreover, content security policies are not regularly updated to ban insecure practices and remove unintended security violations. We argue that many of these problems can be fixed by better exploiting the monitoring facilities of CSP, while other issues deserve additional research, being more rooted into the CSP design.",
"title": ""
},
{
"docid": "neg:1840115_5",
"text": "In winter, rainbow smelt (Osmerus mordax) accumulate glycerol and produce an antifreeze protein (AFP), which both contribute to freeze resistance. The role of differential gene expression in the seasonal pattern of these adaptations was investigated. First, cDNAs encoding smelt and Atlantic salmon (Salmo salar) phosphoenolpyruvate carboxykinase (PEPCK) and smelt glyceraldehyde-3-phosphate dehydrogenase (GAPDH) were cloned so that all sequences required for expression analysis would be available. Using quantitative PCR, expression of beta actin in rainbow smelt liver was compared with that of GAPDH in order to determine its validity as a reference gene. Then, levels of glycerol-3-phosphate dehydrogenase (GPDH), PEPCK, and AFP relative to beta actin were measured in smelt liver over a fall-winter-spring interval. Levels of GPDH mRNA increased in the fall just before plasma glycerol accumulation, implying a driving role in glycerol synthesis. GPDH mRNA levels then declined during winter, well in advance of serum glycerol, suggesting the possibility of GPDH enzyme or glycerol conservation in smelt during the winter months. PEPCK mRNA levels rose in parallel with serum glycerol in the fall, consistent with an increasing requirement for amino acids as metabolic precursors, remained elevated for much of the winter, and then declined in advance of the decline in plasma glycerol. AFP mRNA was elevated at the onset of fall sampling in October and remained elevated until April, implying separate regulation from GPDH and PEPCK. Thus, winter freezing point depression in smelt appears to result from a seasonal cycle of GPDH gene expression, with an ensuing increase in the expression of PEPCK, and a similar but independent cycle of AFP gene expression.",
"title": ""
},
{
"docid": "neg:1840115_6",
"text": "As education communities grow more interested in STEM (science, technology, engineering, and mathematics), schools have integrated more technology and engineering opportunities into their curricula. Makerspaces for all ages have emerged as a way to support STEM learning through creativity, community building, and hands-on learning. However, little research has evaluated the learning that happens in these spaces, especially in young children. One framework that has been used successfully as an evaluative tool in informal and technology-rich learning spaces is Positive Technological Development (PTD). PTD is an educational framework that describes positive behaviors children exhibit while engaging in digital learning experiences. In this exploratory case study, researchers observed children in a makerspace to determine whether the environment (the space and teachers) contributed to children’s Positive Technological Development. N = 20 children and teachers from a Kindergarten classroom were observed over 6 hours as they engaged in makerspace activities. The children’s activity, teacher’s facilitation, and the physical space were evaluated for alignment with the PTD framework. Results reveal that children showed high overall PTD engagement, and that teachers and the space supported children’s learning in complementary aspects of PTD. Recommendations for practitioners hoping to design and implement a young children’s makerspace are discussed.",
"title": ""
},
{
"docid": "neg:1840115_7",
"text": "Advances in computing technology and computer graphics, coupled with huge collections of data, have introduced new visualization techniques. This gives users many choices of visualization techniques for gaining insight into the dataset at hand. However, selecting the most suitable visualization for a given dataset and the task to be performed on the data is subjective. The work presented here introduces a set of visualization metrics to quantify visualization techniques. Based on a comprehensive literature survey, we propose effectiveness, expressiveness, readability, and interactivity as the visualization metrics. Using these metrics, a framework for optimizing the layout of a visualization technique is also presented. The framework is based on an evolutionary algorithm (EA) which uses treemaps as a case study. The EA starts with a randomly initialized population, where each chromosome of the population represents one complete treemap. Using the genetic operators and the proposed visualization metrics as an objective function, the EA finds the optimum visualization layout. The visualizations that evolved are compared with the state-of-the-art treemap visualization tool through a user study. The user study utilizes benchmark tasks for the evaluation. A comparison is also performed using direct assessment, where internal and external visualization metrics are used. Results are further verified using analysis of variance (ANOVA) test. The results suggest better performance of the proposed metrics and the EA-based framework for optimizing visualization layout. The proposed methodology can also be extended to other visualization techniques. © 2017 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840115_8",
"text": "There is much debate as to whether online offenders are a distinct group of sex offenders or if they are simply typical sex offenders using a new technology. A meta-analysis was conducted to examine the extent to which online and offline offenders differ on demographic and psychological variables. Online offenders were more likely to be Caucasian and were slightly younger than offline offenders. In terms of psychological variables, online offenders had greater victim empathy, greater sexual deviancy, and lower impression management than offline offenders. Both online and offline offenders reported greater rates of childhood physical and sexual abuse than the general population. Additionally, online offenders were more likely to be Caucasian, younger, single, and unemployed compared with the general population. Many of the observed differences can be explained by assuming that online offenders, compared with offline offenders, have greater self-control and more psychological barriers to acting on their deviant interests.",
"title": ""
},
{
"docid": "neg:1840115_9",
"text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.",
"title": ""
},
{
"docid": "neg:1840115_10",
"text": "OBJECTIVE\nTo examine psychometric properties of the Self-Care Inventory-revised (SCI-R), a self-report measure of perceived adherence to diabetes self-care recommendations, among adults with diabetes.\n\n\nRESEARCH DESIGN AND METHODS\nWe used three data sets of adult type 1 and type 2 diabetic patients to examine psychometric properties of the SCI-R. Principal component and factor analyses examined whether a general factor or common factors were present. Associations with measures of theoretically related concepts were examined to assess SCI-R concurrent and convergent validity. Internal reliability coefficients were calculated. Responsiveness was assessed using paired t tests, effect size, and Guyatt's statistic for type 1 patients who completed psychoeducation.\n\n\nRESULTS\nPrincipal component and factor analyses identified a general factor but no consistent common factors. Internal consistency of the SCI-R was alpha = 0.87. Correlation with a measure of frequency of diabetes self-care behaviors was r = 0.63, providing evidence for SCI-R concurrent validity. The SCI-R correlated with diabetes-related distress (r = -0.36), self-esteem (r = 0.25), self-efficacy (r = 0.47), depression (r = -0.22), anxiety (r = -0.24), and HbA(1c) (r = -0.37), supporting construct validity. Responsiveness analyses showed SCI-R scores improved with diabetes psychoeducation with a medium effect size of 0.62 and a Guyatt's statistic of 0.85.\n\n\nCONCLUSIONS\nThe SCI-R is a brief, psychometrically sound measure of perceptions of adherence to recommended diabetes self-care behaviors of adults with type 1 or type 2 diabetes.",
"title": ""
},
{
"docid": "neg:1840115_11",
"text": "This and the companion paper present an analysis of the amplitude and time-dependent changes of the apparent frequency of a seven-story reinforced-concrete hotel building in Van Nuys, Calif. Data of recorded response to 12 earthquakes are used, representing very small, intermediate, and large excitations (peak ground velocity, vmax = 0.6, 2, 11, 23, and 57 cm/s, causing no, minor, and major damage). This paper presents a description of the building structure, foundation, and surrounding soil, the strong motion data used in the analysis, the soil-structure interaction model assumed, and results of Fourier analysis of the recorded response. The results show that the apparent frequency changes from one earthquake to another. The general trend is a reduction with increasing amplitudes of motion. The smallest values (measured during the damaging motions) are 0.4 and 0.5 Hz for the longitudinal and transverse directions. The largest values are 1.1 and 1.4 Hz, respectively, determined from response to ambient noise after the damage occurred. This implies 64% reduction of the system frequency, or a factor of about 3 change, from small to large response amplitudes, and is interpreted to be caused by nonlinearities in the soil.",
"title": ""
},
{
"docid": "neg:1840115_12",
"text": "• Sequence of tokens mapped to word embeddings. • Bidirectional LSTM builds context-dependent representations for each word. • A small feedforward layer encourages generalisation. • Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. • Unable to model unseen words, learns poor representations for infrequent words, and unable to capture character-level patterns.",
"title": ""
},
{
"docid": "neg:1840115_13",
"text": "To refine user interest profiling, this paper focuses on extending a scientific subject ontology via keyword clustering and on improving the accuracy and effectiveness of recommendation of electronic academic publications in online services. A clustering approach is proposed for domain keywords for the purpose of the subject ontology extension. Based on the keyword clusters, the construction of user interest profiles is presented at a rather fine granularity level. In the construction of user interest profiles, we apply two types of interest profiles: explicit profiles and implicit profiles. The explicit weighted keyword graph",
"title": ""
},
{
"docid": "neg:1840115_14",
"text": "Classrooms are complex social systems, and student-teacher relationships and interactions are also complex, multicomponent systems. We posit that the nature and quality of relationship interactions between teachers and students are fundamental to understanding student engagement, can be assessed through standardized observation methods, and can be changed by providing teachers knowledge about developmental processes relevant for classroom interactions and personalized feedback/support about their interactive behaviors and cues. When these supports are provided to teachers’ interactions, student engagement increases. In this chapter, we focus on the theoretical and empirical links between interactions and engagement and present an approach to intervention designed to increase the quality of such interactions and, in turn, increase student engagement and, ultimately, learning and development. Recognizing general principles of development in complex systems, a theory of the classroom as a setting for development, and a theory of change specific to this social setting are the ultimate goals of this work. Engagement, in this context, is both an outcome in its own right.",
"title": ""
},
{
"docid": "neg:1840115_15",
"text": "AIM\nWe investigated the uptake and pharmacokinetics of l-ergothioneine (ET), a dietary thione with free radical scavenging and cytoprotective capabilities, after oral administration to humans, and its effect on biomarkers of oxidative damage and inflammation.\n\n\nRESULTS\nAfter oral administration, ET is avidly absorbed and retained by the body with significant elevations in plasma and whole blood concentrations, and relatively low urinary excretion (<4% of administered ET). ET levels in whole blood were highly correlated to levels of hercynine and S-methyl-ergothioneine, suggesting that they may be metabolites. After ET administration, some decreasing trends were seen in biomarkers of oxidative damage and inflammation, including allantoin (urate oxidation), 8-hydroxy-2'-deoxyguanosine (DNA damage), 8-iso-PGF2α (lipid peroxidation), protein carbonylation, and C-reactive protein. However, most of the changes were non-significant.\n\n\nINNOVATION\nThis is the first study investigating the administration of pure ET to healthy human volunteers and monitoring its uptake and pharmacokinetics. This compound is rapidly gaining attention due to its unique properties, and this study lays the foundation for future human studies.\n\n\nCONCLUSION\nThe uptake and retention of ET by the body suggests an important physiological function. The decreasing trend of oxidative damage biomarkers is consistent with animal studies suggesting that ET may function as a major antioxidant but perhaps only under conditions of oxidative stress. Antioxid. Redox Signal. 26, 193-206.",
"title": ""
},
{
"docid": "neg:1840115_16",
"text": "Green Mining is a field of MSR that studies software energy consumption and relies on software performance data. Unfortunately there is a severe lack of publicly available software power use performance data. This means that green mining researchers must generate this data themselves by writing tests, building multiple revisions of a product, and then running these tests multiple times (10+) for each software revision while measuring power use. Then, they must aggregate these measurements to estimate the energy consumed by the tests for each software revision. This is time consuming and is made more difficult by the constraints of mobile devices and their OSes. In this paper we propose, implement, and demonstrate Green Miner: the first dedicated hardware mining software repositories testbed. The Green Miner physically measures the energy consumption of mobile devices (Android phones) and automates the testing of applications, and the reporting of measurements back to developers and researchers. The Green Miner has already produced valuable results for commercial Android application developers, and has been shown to replicate other power studies' results.",
"title": ""
},
{
"docid": "neg:1840115_17",
"text": "Nowadays, health diseases are increasing day by day due to lifestyle and heredity. In particular, heart disease has become more common these days, i.e., people's lives are at risk. Each individual has different values for blood pressure, cholesterol, and pulse rate. But according to medically proven results, the normal value of blood pressure is 120/90, cholesterol is and pulse rate is 72. This paper gives a survey of different classification techniques used for predicting the risk level of each person based on age, gender, blood pressure, cholesterol, and pulse rate. The patient risk level is classified using data mining classification techniques such as Naïve Bayes, KNN, decision tree algorithms, and neural networks. Accuracy of the risk level prediction is higher when more attributes are used.",
"title": ""
},
{
"docid": "neg:1840115_18",
"text": "A wealth of research has established that practice tests improve memory for the tested material. Although the benefits of practice tests are well documented, the mechanisms underlying testing effects are not well understood. We propose the mediator effectiveness hypothesis, which states that more-effective mediators (that is, information linking cues to targets) are generated during practice involving tests with restudy versus during restudy only. Effective mediators must be retrievable at time of test and must elicit the target response. We evaluated these two components of mediator effectiveness for learning foreign language translations during practice involving either test-restudy or restudy only. Supporting the mediator effectiveness hypothesis, test-restudy practice resulted in mediators that were more likely to be retrieved and more likely to elicit targets on a final test.",
"title": ""
},
{
"docid": "neg:1840115_19",
"text": "Attributed graphs are becoming important tools for modeling information networks, such as the Web and various social networks (e.g. Facebook, LinkedIn, Twitter). However, it is computationally challenging to manage and analyze attributed graphs to support effective decision making. In this paper, we propose, Pagrol, a parallel graph OLAP (Online Analytical Processing) system over attributed graphs. In particular, Pagrol introduces a new conceptual Hyper Graph Cube model (which is an attributed-graph analogue of the data cube model for relational DBMS) to aggregate attributed graphs at different granularities and levels. The proposed model supports different queries as well as a new set of graph OLAP Roll-Up/Drill-Down operations. Furthermore, on the basis of Hyper Graph Cube, Pagrol provides an efficient MapReduce-based parallel graph cubing algorithm, MRGraph-Cubing, to compute the graph cube for an attributed graph. Pagrol employs numerous optimization techniques: (a) a self-contained join strategy to minimize I/O cost; (b) a scheme that groups cuboids into batches so as to minimize redundant computations; (c) a cost-based scheme to allocate the batches into bags (each with a small number of batches); and (d) an efficient scheme to process a bag using a single MapReduce job. Results of extensive experimental studies using both real Facebook and synthetic datasets on a 128-node cluster show that Pagrol is effective, efficient and scalable.",
"title": ""
}
] |
1840116 | Three-Port Series-Resonant DC–DC Converter to Interface Renewable Energy Sources With Bidirectional Load and Energy Storage Ports | [
{
"docid": "pos:1840116_0",
"text": "This letter proposes a novel converter topology that interfaces three power ports: a source, a bidirectional storage port, and an isolated load port. The proposed converter is based on a modified version of the isolated half-bridge converter topology that utilizes three basic modes of operation within a constant-frequency switching cycle to provide two independent control variables. This allows tight control over two of the converter ports, while the third port provides the power balance in the system. The switching sequence ensures a clamping path for the energy of the leakage inductance of the transformer at all times. This energy is further utilized to achieve zero-voltage switching for all primary switches for a wide range of source and load conditions. Basic steady-state analysis of the proposed converter is included, together with a suggested structure for feedback control. Key experimental results are presented that validate the converter operation and confirm its ability to achieve tight independent control over two power processing paths. This topology promises significant savings in component count and losses for power-harvesting systems. The proposed topology and control is particularly relevant to battery-backed power systems sourced by solar or fuel cells",
"title": ""
},
{
"docid": "pos:1840116_1",
"text": "A three-port triple-half-bridge bidirectional dc-dc converter topology is proposed in this paper. The topology comprises a high-frequency three-winding transformer and three half-bridges, one of which is a boost half-bridge interfacing a power port with a wide operating voltage. The three half-bridges are coupled by the transformer, thereby providing galvanic isolation for all the power ports. The converter is controlled by phase shift, which achieves the primary power flow control, in combination with pulsewidth modulation (PWM). Because of the particular structure of the boost half-bridge, voltage variations at the port can be compensated for by operating the boost half-bridge, together with the other two half-bridges, at an appropriate duty cycle to keep a constant voltage across the half-bridge. The resulting waveforms applied to the transformer windings are asymmetrical due to the automatic volt-seconds balancing of the half-bridges. With the PWM control it is possible to reduce the rms loss and to extend the zero-voltage switching operating range to the entire phase shift region. A fuel cell and supercapacitor generation system is presented as an embodiment of the proposed multiport topology. The theoretical considerations are verified by simulation and with experimental results from a 1 kW prototype.",
"title": ""
},
{
"docid": "pos:1840116_2",
"text": "Multiport dc-dc converters are particularly interesting for sustainable energy generation systems where diverse sources and storage elements are to be integrated. This paper presents a zero-voltage switching (ZVS) three-port bidirectional dc-dc converter. A simple and effective duty ratio control method is proposed to extend the ZVS operating range when input voltages vary widely. Soft-switching conditions over the full operating range are achievable by adjusting the duty ratio of the voltage applied to the transformer winding in response to the dc voltage variations at the port. Keeping the volt-second product (half-cycle voltage-time integral) equal for all the windings leads to ZVS conditions over the entire operating range. A detailed analysis is provided for both the two-port and the three-port converters. Furthermore, for the three-port converter a dual-PI-loop based control strategy is proposed to achieve constant output voltage, power flow management, and soft-switching. The three-port converter is implemented and tested for a fuel cell and supercapacitor system.",
"title": ""
}
] | [
{
"docid": "neg:1840116_0",
"text": "Microalgae have received much interest as a biofuel feedstock in response to the rising energy crisis, climate change and depletion of natural resources. Development of biofuels from microalgae does not yet achieve economic feasibility, owing to the overwhelming capital investment and operating costs. Hence, high-value co-products have been produced through the extraction of a fraction of the algae to improve the economics of a microalgae biorefinery. Examples of these high-value products are pigments, proteins, lipids, carbohydrates, vitamins and anti-oxidants, with applications in the cosmetics, nutritional and pharmaceutical industries. To promote the sustainability of this process, an innovative microalgae biorefinery structure is implemented through the production of multiple products in the form of high-value products and biofuel. This review presents the current challenges in the extraction of high-value products from microalgae and their integration into the biorefinery. The economic potential of the microalgae biorefinery was assessed to highlight the feasibility of the process.",
"title": ""
},
{
"docid": "neg:1840116_1",
"text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.",
"title": ""
},
{
"docid": "neg:1840116_2",
"text": "Impervious surface has been recognized as a key indicator in assessing urban environments. However, accurate impervious surface extraction is still a challenge. Effectiveness of impervious surface in urban land-use classification has not been well addressed. This paper explored extraction of impervious surface information from Landsat Enhanced Thematic Mapper data based on the integration of fraction images from linear spectral mixture analysis and land surface temperature. A new approach for urban land-use classification, based on the combined use of impervious surface and population density, was developed. Five urban land-use classes (i.e., low-, medium-, high-, and very-high-intensity residential areas, and commercial/industrial/transportation uses) were developed in the city of Indianapolis, Indiana, USA. Results showed that the integration of fraction images and surface temperature provided substantially improved impervious surface image. Accuracy assessment indicated that the root-mean-square error and system error yielded 9.22% and 5.68%, respectively, for the impervious surface image. The overall classification accuracy of 83.78% for five urban land-use classes was obtained. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840116_3",
"text": "Vertebrate CpG islands (CGIs) are short interspersed DNA sequences that deviate significantly from the average genomic pattern by being GC-rich, CpG-rich, and predominantly nonmethylated. Most, perhaps all, CGIs are sites of transcription initiation, including thousands that are remote from currently annotated promoters. Shared DNA sequence features adapt CGIs for promoter function by destabilizing nucleosomes and attracting proteins that create a transcriptionally permissive chromatin state. Silencing of CGI promoters is achieved through dense CpG methylation or polycomb recruitment, again using their distinctive DNA sequence composition. CGIs are therefore generically equipped to influence local chromatin structure and simplify regulation of gene activity.",
"title": ""
},
{
"docid": "neg:1840116_4",
"text": "Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown, our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.",
"title": ""
},
{
"docid": "neg:1840116_5",
"text": "We present the performance of a patient with acquired dysgraphia, DS, who has intact oral spelling (100% correct) but severely impaired written spelling (7% correct). Her errors consisted entirely of well-formed letter substitutions. This striking dissociation is further characterized by consistent preservation of orthographic, as opposed to phonological, length in her written output. This pattern of performance indicates that DS has intact graphemic representations, and that her errors are due to a deficit in letter shape assignment. We further interpret the occurrence of a small percentage of lexical errors in her written responses and a significant effect of letter frequencies and transitional probabilities on the pattern of letter substitutions as the result of a repair mechanism that locally constrains DS' written output.",
"title": ""
},
{
"docid": "neg:1840116_6",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "neg:1840116_7",
"text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.",
"title": ""
},
{
"docid": "neg:1840116_8",
"text": "In her book Introducing Arguments, Linda Pylkkänen distinguishes between the core and noncore arguments of verbs by means of a detailed discussion of applicative and causative constructions. The term applicative refers to structures that in more general linguistic terms are defined as ditransitive, i.e. when both a direct and an indirect object are associated with the verb, as exemplified in (1) (Pylkkänen, 2008: 13):",
"title": ""
},
{
"docid": "neg:1840116_9",
"text": "The rapid digitalisation of the hospitality industry over recent years has brought forth many new points of attack for consideration. The hasty implementation of these systems has created a reality in which businesses are using the technical solutions, but employees have very little awareness of the threats and implications that they might present. This gap in awareness is further compounded by the existence of pre-established, often rigid, cultures that drive how hospitality businesses operate. Potential attackers are recognising this, and the last two years have seen a huge increase in cyber-attacks within the sector. Attempts at addressing the increasing threats have taken the form of technical solutions such as encryption, access control, CCTV, etc. However, a large majority of security breaches can be directly attributed to human error. It is therefore necessary that measures for addressing the rising trend of cyber-attacks go beyond just providing technical solutions and make provision for educating employees about how to address the human elements of security. Inculcating security awareness amongst hospitality employees will provide a foundation upon which a culture of security can be created to promote the seamless and secure interaction of hotel users and technology. One way that the hospitality industry has tried to solve the awareness issue is through its current paper-based training. This is unengaging, expensive, and presents limited ways to deploy, monitor, and evaluate the impact and effectiveness of the content. This leads to cycles of constant training, making it very hard to initiate awareness, particularly within those on minimum-waged, short-term job roles. This paper presents a structured approach for eliciting industry requirements for developing and implementing an immersive Cyber Security Awareness learning platform. It used a series of over 40 interviews and a threat analysis of the hospitality industry to identify the requirements for designing and implementing a cyber security program that encourages engagement through a cycle of reward and recognition. In particular, the need for gamification elements to provide an engaging but gentle way of educating those with little or no desire to learn was identified and implemented. Also presented is a method for guiding and monitoring their employees' progress through the learning management system whilst monitoring the levels of engagement and the positive impact the training is having on the business.",
"title": ""
},
{
"docid": "neg:1840116_10",
"text": "Dark Web analysis is an important aspect in the field of counter terrorism (CT). In the present scenario, terrorist attacks are the biggest problem for mankind, and the whole world is under constant threat from these well-planned, sophisticated and coordinated terrorist operations. Terrorists anonymously set up various web sites embedded in the public Internet, exchanging ideology, spreading propaganda, and recruiting new members. The dark web is a hotspot where terrorists are communicating and spreading their messages. Every country is now focusing on CT. Dark web analysis can be an efficient proactive method for CT, detecting and averting terrorist threats/attacks. In this paper we propose a dark web analysis model that analyzes dark web forums for CT, connecting the dots to help protect the country from terrorist attacks.",
"title": ""
},
{
"docid": "neg:1840116_11",
"text": "Do men die young and sick, or do women live long and healthy? By trying to explain the sexual dimorphism in life expectancy, both biological and environmental aspects are presently being addressed. Besides age-related changes, both the immune and the endocrine system exhibit significant sex-specific differences. This review deals with the aging immune system and its interplay with sex steroid hormones. Together, they impact on the etiopathology of many infectious diseases, which are still the major causes of morbidity and mortality in people at old age. Among men, susceptibilities toward many infectious diseases and the corresponding mortality rates are higher. Responses to various types of vaccination are often higher among women, who thereby also mount stronger humoral responses. Women appear immune-privileged. The major sex steroid hormones exhibit opposing effects on cells of both the adaptive and the innate immune system: estradiol being mainly enhancing, testosterone by and large suppressive. However, levels of sex hormones change with age. At the menopause transition, dropping estradiol potentially enhances immunosenescence effects, placing postmenopausal women at additional, specific risks. In conclusion, interventions during aging that specifically consider the changing levels of individual hormones may provide potent options for maintaining optimal immune function.",
"title": ""
},
{
"docid": "neg:1840116_12",
"text": "In this paper, higher-order correlation clustering (HOCC) is used for text line detection in natural images. We treat text line detection as a graph partitioning problem, where each vertex is represented by a Maximally Stable Extremal Region (MSER). First, weak hypotheses are proposed by coarsely grouping MSERs based on their spatial alignment and appearance consistency. Then, higher-order correlation clustering (HOCC) is used to partition the MSERs into text line candidates, using the hypotheses as soft constraints to enforce long range interactions. We further propose a regularization method to solve the Semidefinite Programming problem in the inference. Finally, we use a simple texton-based texture classifier to filter out the non-text areas. This framework allows us to naturally handle multiple orientations, languages and fonts. Experiments show that our approach achieves competitive performance compared to the state of the art.",
"title": ""
},
{
"docid": "neg:1840116_13",
"text": "The purpose of this note is to describe a useful lesson we learned on authentication protocol design. In a recent article [9], we presented a simple authentication protocol to illustrate the concept of a trusted server. The protocol has a flaw, which was brought to our attention by Martín Abadi of DEC. In what follows, we first describe the protocol and its flaw, and how the flaw was introduced in the process of deriving the protocol from its correct full information version. We then introduce a principle, called the Principle of Full Information, and explain how its use could have prevented the protocol flaw. We believe the Principle of Full Information is a useful authentication protocol design principle, and advocate its use. Lastly, we present several heuristics for simplifying full information protocols and illustrate their application to a mutual authentication protocol.",
"title": ""
},
{
"docid": "neg:1840116_14",
"text": "Why do some new technologies emerge and quickly supplant incumbent technologies while others take years or decades to take off? We explore this question by presenting a framework that considers both the focal competing technologies as well as the ecosystems in which they are embedded. Within our framework, each episode of technology transition is characterized by the ecosystem emergence challenge that confronts the new technology and the ecosystem extension opportunity that is available to the old technology. We identify four qualitatively distinct regimes with clear predictions for the pace of substitution. Evidence from 10 episodes of technology transitions in the semiconductor lithography equipment industry from 1972 to 2009 offers strong support for our framework. We discuss the implication of our approach for firm strategy. Disciplines Management Sciences and Quantitative Methods This journal article is available at ScholarlyCommons: https://repository.upenn.edu/mgmt_papers/179 Innovation Ecosystems and the Pace of Substitution: Re-examining Technology S-curves Ron Adner Tuck School of Business, Dartmouth College Strategy and Management 100 Tuck Hall Hanover, NH 03755, USA Tel: 1 603 646 9185 Email: ron.adner@dartmouth.edu Rahul Kapoor The Wharton School University of Pennsylvania Philadelphia, PA 19104 Tel: 1 215 898 6458 Email: kapoorr@wharton.upenn.edu",
"title": ""
},
{
"docid": "neg:1840116_15",
"text": "A video from a moving camera produces different number of observations of different scene areas. We can construct an attention map of the scene by bringing the frames to a common reference and counting the number of frames that observed each scene point. Different representations can be constructed from this. The base of the attention map gives the scene mosaic. Super-resolved images of parts of the scene can be obtained using a subset of observations or video frames. We can combine mosaicing with super-resolution by using all observations, but the magnification factor will vary across the scene based on the attention received. The height of the attention map indicates the amount of super-resolution for that scene point. We modify the traditional super-resolution framework to generate a varying resolution image for panning cameras in this paper. The varying resolution image uses all useful data available in a video. We introduce the concept of attention-based super-resolution and give the modified framework for it. We also show its applicability on a few indoor and outdoor videos.",
"title": ""
},
{
"docid": "neg:1840116_16",
"text": "The aim of this chapter is to give an overview of domain adaptation and transfer learning with a specific view to visual applications. After a general motivation, we first position domain adaptation in the more general transfer learning problem. Second, we try to address and analyze briefly the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods, addressing both the homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures which led to the new type of domain adaptation methods that integrate the adaptation within the deep architecture. Fourth, we review DA methods that go beyond image categorization, such as object detection, image segmentation, video analyses or learning visual attributes. We conclude the chapter with a section where we relate domain adaptation to other machine learning solutions.",
"title": ""
},
{
"docid": "neg:1840116_17",
"text": "Dual control frameworks for systems subject to uncertainties aim at simultaneously learning the unknown parameters while controlling the system dynamics. We propose a robust dual model predictive control algorithm for systems with bounded uncertainty with application to soft landing control. The algorithm exploits a robust control invariant set to guarantee constraint enforcement in spite of the uncertainty, and a constrained estimation algorithm to guarantee admissible parameter estimates. The impact of the control input on parameter learning is accounted for by including in the cost function a reference input, which is designed online to provide persistent excitation. The reference input design problem is non-convex, and here is solved by a sequence of relaxed convex problems. The results of the proposed method in a soft-landing control application in transportation systems are shown.",
"title": ""
},
{
"docid": "neg:1840116_18",
"text": "Automatic expert assignment is a common problem encountered in both industry and academia. For example, for conference program chairs and journal editors, in order to collect \"good\" judgments for a paper, it is necessary for them to assign the paper to the most appropriate reviewers. Choosing appropriate reviewers of course includes a number of considerations such as expertise and authority, but also diversity and avoiding conflicts. In this paper, we explore the expert retrieval problem and implement an automatic paper-reviewer recommendation system that considers aspects of expertise, authority, and diversity. In particular, a graph is first constructed on the possible reviewers and the query paper, incorporating expertise and authority information. Then a Random Walk with Restart (RWR) [1] model is employed on the graph with a sparsity constraint, incorporating diversity information. Extensive experiments on two reviewer recommendation benchmark datasets show that the proposed method obtains performance gains over state-of-the-art reviewer recommendation systems in terms of expertise, authority, diversity, and, most importantly, relevance as judged by human experts.",
"title": ""
}
] |
1840117 | Generation Alpha at the Intersection of Technology, Play and Motivation | [
{
"docid": "pos:1840117_0",
"text": "This article reviews the literature concerning the introduction of interactive whiteboards (IWBs) in educational settings. It identifies common themes to emerge from a burgeoning and diverse literature, which includes reports and summaries available on the Internet. Although the literature reviewed is overwhelmingly positive about the impact and the potential of IWBs, it is primarily based on the views of teachers and pupils. There is insufficient evidence to identify the actual impact of such technologies upon learning either in terms of classroom interaction or upon attainment and achievement. This article examines this issue in light of varying conceptions of interactivity and research into the effects of learning with verbal and visual information.",
"title": ""
},
{
"docid": "pos:1840117_1",
"text": "Why should you wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it, the book that you can receive directly after purchasing. This rules of play game design fundamentals is a well-known book in the world, and of course many people will try to own it. Why don't you become the first? Still confused about the way?",
"title": ""
},
{
"docid": "pos:1840117_2",
"text": "This is the translation of a paper by Marc Prensky, the originator of the famous metaphor of digital natives and digital immigrants. Here, ten years after the birth of that successful metaphor, Prensky outlines that, while the distinction between digital natives and immigrants will progressively become less important, new concepts will be needed to represent the continuous evolution of the relationship between man and digital technologies. In this paper Prensky introduces the concept of digital wisdom, a human quality which develops as a result of the empowerment that the natural human skills can receive through a creative and clever use of digital technologies. KEY-WORDS Digital natives, digital immigrants, digital wisdom, digital empowerment. Prensky M. (2010). H. Sapiens Digitale: dagli Immigrati digitali e nativi digitali alla saggezza digitale. TD-Tecnologie Didattiche, 50, pp. 17-24. [The problems of today's world cannot be solved by resorting to the same kind of thinking that created them.]",
"title": ""
}
] | [
{
"docid": "neg:1840117_0",
"text": "Currently, the number of surveillance cameras is rapidly increasing in response to security concerns. But constructing an intelligent detection system is not easy because it needs high computing performance. This study aims to construct a real-world video surveillance system that can effectively detect moving people using limited resources. To this end, we propose a simple framework to detect and recognize moving objects using outdoor CCTV video footage by combining background subtraction and Convolutional Neural Networks (CNNs). A background subtraction algorithm is first applied to each video frame to find the regions of interest (ROIs). A CNN classification is then carried out to classify the obtained ROIs into one of the predefined classes. Our approach much reduces the computation complexity in comparison to other object detection algorithms. For the experiments, new datasets are constructed by filming alleys and playgrounds, places where crimes are likely to occur. Different image sizes and experimental settings are tested to construct the best classifier for detecting people. The best classification accuracy of 0.85 was obtained for a test set from the same camera as the training set and 0.82 with different cameras.",
"title": ""
},
{
"docid": "neg:1840117_1",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
},
{
"docid": "neg:1840117_2",
"text": "Sobolevicanthus transvaalensis n.sp. is described from the Cape Teal, Anas capensis Gmelin, 1789, collected in the Republic of South Africa. The new species possesses 8 skrjabinoid hooks 78–88 μm long (mean 85 μm) and a short claviform cirrus-sac 79–143 μm long and resembles S. javanensis (Davis, 1945) and S. terraereginae (Johnston, 1913). It can be distinguished from S. javanensis by its shorter cirrus-sac and smaller cirrus diameter, and by differences in the morphology of the accessory sac and vagina and in their position relative to the cirrus-sac. It can be separated from S. terraereginae on the basis of cirrus length and diameter. The basal diameter of the cirrus in S. terraereginae is three times that in S. transvaalensis.",
"title": ""
},
{
"docid": "neg:1840117_3",
"text": "Software Development Life Cycle (SDLC) is a process consisting of various phases like requirements analysis, designing, coding, testing and implementation & maintenance of a software system, as well as the way in which these phases are implemented. Research studies reveal that the initial two phases, viz. requirements and design, are the skeleton of the entire development life cycle. Designing has several sub-activities such as Architectural, Function-Oriented and Object-Oriented design, which aim to transform the requirements into detailed specifications covering all facets of the system in a proper way, but at the same time, there exist various related challenges too. One of the foremost challenges is the minimal interaction between construction and design teams, causing numerous problems during design such as: production delays, incomplete designs, rework, change orders, etc. Prior research studies reveal that Artificial Intelligence (AI) techniques may eliminate these problems by offering several tools/techniques to automate certain processes up to a certain extent. In this paper, our major aim is to identify the challenges in each of the stages of the design phase and the possibility of AI techniques overcoming these identified issues. In addition, the paper also explores the relationship between these issues and their possible AI solution/s through a Venn diagram. For some of the issues, there exists more than one AI technique, but for some issues, no AI technique has been found to overcome the same and accordingly, those issues are still open for further research.",
"title": ""
},
{
"docid": "neg:1840117_4",
"text": "This paper proposes a motion-focusing method to extract key frames and generate summarization synchronously for surveillance videos. Within each pre-segmented video shot, the proposed method focuses on one constant-speed motion and aligns the video frames by fixing this focused motion into a static situation. According to the relative motion theory, the other objects in the video are moving relatively to the selected kind of motion. This method finally generates a summary image containing all moving objects and embedded with spatial and motional information, together with key frames to provide details corresponding to the regions of interest in the summary image. We apply this method to the lane surveillance system and the results provide us a new way to understand the video efficiently.",
"title": ""
},
{
"docid": "neg:1840117_5",
"text": "The rise of cloud computing has kept virtualization technology heating up. In Xen's I/O virtualization subsystem, under virtual machine environments with mixed task types, the existing schedulers cannot respond to I/O-bound tasks in time. This paper presents the ECredit scheduler, which incorporates complexity evaluation of I/O tasks into Xen's I/O virtualization subsystem. It prioritizes I/O-bound tasks while realizing fair scheduling. The experiments show that the optimized scheduling algorithm can reduce the response time of I/O-bound tasks and improve the performance of the virtual system.",
"title": ""
},
{
"docid": "neg:1840117_6",
"text": "The design of tall buildings essentially involves a conceptual design, approximate analysis, preliminary design and optimization, to safely carry gravity and lateral loads. The design criteria are strength, serviceability, stability and human comfort. Strength is satisfied by limit stresses, while serviceability is satisfied by drift limits in the range of H/500 to H/1000. Stability is satisfied by a sufficient factor of safety against buckling and P-Delta effects. The factor of safety is around 1.67 to 1.92. The human comfort aspects are satisfied by accelerations in the range of 10 to 25 milli-g, where g = acceleration due to gravity, about 981 cm/sec^2. The aim of the structural engineer is to arrive at suitable structural schemes to satisfy these criteria, and assess their structural weights in weight/unit area in square feet or square meters. This initiates structural drawings and specifications to enable construction engineers to proceed with fabrication and erection operations. The weight of steel in lbs/sqft or in kg/sqm is often a parameter the architects and construction managers are looking for from the structural engineer. This includes the weights of the floor system, girders, braces and columns. The premium for wind is optimized to yield drifts in the range of H/500, where H is the height of the tall building. Herein, some aspects of the design of the gravity system and the lateral system are explored. Preliminary design and optimization steps are illustrated with examples of actual tall buildings designed by CBM Engineers, Houston, Texas, with whom the author has been associated during the past 3 decades. Dr. Joseph P. Colaco, its President, has been responsible for the tallest buildings in Los Angeles, Houston, St. Louis, Dallas, New Orleans, and Washington, D.C., with the author on its design staff as a Senior Structural Engineer. 
Research in the development of approximate methods of analysis, and preliminary design and optimization, has been conducted at WPI, with several of the author’s graduate students. These are also illustrated. Software systems to do approximate analysis of shear-wall frame, framed-tube, out rigger braced tall buildings are illustrated. Advanced Design courses in reinforced and pre-stressed concrete, as well as structural steel design at WPI, use these systems. Research herein, was supported by grants from NSF, Bethlehem Steel, and Army.",
"title": ""
},
{
"docid": "neg:1840117_7",
"text": "Maxout network is a powerful alternate to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout network is prone to overfitting thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for max-out networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.",
"title": ""
},
{
"docid": "neg:1840117_8",
"text": "This paper postulates that water structure is altered by biomolecules as well as by disease-enabling entities such as certain solvated ions, and in turn water dynamics and structure affect the function of biomolecular interactions. Although the structural and dynamical alterations are subtle, they perturb a well-balanced system sufficiently to facilitate disease. We propose that the disruption of water dynamics between and within cells underlies many disease conditions. We survey recent advances in magnetobiology, nanobiology, and colloid and interface science that point compellingly to the crucial role played by the unique physical properties of quantum coherent nanomolecular clusters of magnetized water in enabling life at the cellular level by solving the “problems” of thermal diffusion, intracellular crowding, and molecular self-assembly. Interphase water and cellular surface tension, normally maintained by biological sulfates at membrane surfaces, are compromised by exogenous interfacial water stressors such as cationic aluminum, with consequences that include greater local water hydrophobicity, increased water tension, and interphase stretching. The ultimate result is greater “stiffness” in the extracellular matrix and either the “soft” cancerous state or the “soft” neurodegenerative state within cells. Our hypothesis provides a basis for understanding why so many idiopathic diseases of today are highly stereotyped and pluricausal. OPEN ACCESS Entropy 2013, 15 3823",
"title": ""
},
{
"docid": "neg:1840117_9",
"text": "We propose a method that allows an unskilled user to create an accurate physical replica of a digital 3D model. We use a projector/camera pair to scan a work in progress, and project multiple forms of guidance onto the object itself that indicate which areas need more material, which need less, and where any ridges, valleys or depth discontinuities are. The user adjusts the model using the guidance and iterates, making the shape of the physical object approach that of the target 3D model over time. We show how this approach can be used to create a duplicate of an existing object, by scanning the object and using that scan as the target shape. The user is free to make the reproduction at a different scale and out of different materials: we turn a toy car into cake. We extend the technique to support replicating a sequence of models to create stop-motion video. We demonstrate an end-to-end system in which real-world performance capture data is retargeted to claymation. Our approach allows users to easily and accurately create complex shapes, and naturally supports a large range of materials and model sizes.",
"title": ""
},
{
"docid": "neg:1840117_10",
"text": "This page is dedicated to design science research in Information Systems (IS). Design science research is yet another \"lens\" or set of synthetic and analytical techniques and perspectives (complementing the Positivist and Interpretive perspectives) for performing research in IS. Design science research involves the creation of new knowledge through design of novel or innovative artifacts (things or processes that have or can have material existence) and analysis of the use and/or performance of such artifacts along with reflection and abstraction—to improve and understand the behavior of aspects of Information Systems. Such artifacts include—but certainly are not limited to—algorithms (e.g. for information retrieval), human/computer interfaces, and system design methodologies or languages. Design science researchers can be found in many disciplines and fields, notably Engineering and Computer Science; they use a variety of approaches, methods and techniques. In Information Systems, following a number of years of a general shift in IS research away from technological to managerial and organizational issues, an increasing number of observers are calling for a return to an exploration of the \"IT\" that underlies all IS research (Orlikowski and Iacono, 2001) thus underlining the need for IS design science research.",
"title": ""
},
{
"docid": "neg:1840117_11",
"text": "This paper explores the combination of self-organizing map (SOM) and feedback, in order to represent sequences of inputs. In general, neural networks with time-delayed feedback represent time implicitly, by combining current inputs and past activities. It has been difficult to apply this approach to SOM, because feedback generates instability during learning. We demonstrate a solution to this problem, based on a nonlinearity. The result is a generalization of SOM that learns to represent sequences recursively. We demonstrate that the resulting representations are adapted to the temporal statistics of the input series.",
"title": ""
},
{
"docid": "neg:1840117_12",
"text": "This paper is about detecting incorrect arcs in a dependency parse for sentences that contain grammar mistakes. Pruning these arcs results in well-formed parse fragments that can still be useful for downstream applications. We propose two automatic methods that jointly parse the ungrammatical sentence and prune the incorrect arcs: a parser retrained on a parallel corpus of ungrammatical sentences with their corrections, and a sequence-to-sequence method. Experimental results show that the proposed strategies are promising for detecting incorrect syntactic dependencies as well as incorrect semantic dependencies.",
"title": ""
},
{
"docid": "neg:1840117_13",
"text": "Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable.",
"title": ""
},
{
"docid": "neg:1840117_14",
"text": "For autonomous driving, moving objects like vehicles and pedestrians are of critical importance as they primarily influence the maneuvering and braking of the car. Typically, they are detected by motion segmentation of dense optical flow augmented by a CNN based object detector for capturing semantics. In this paper, our aim is to jointly model motion and appearance cues in a single convolutional network. We propose a novel two-stream architecture for joint learning of object detection and motion segmentation. We designed three different flavors of our network to establish a systematic comparison. It is shown that the joint training of tasks significantly improves accuracy compared to training them independently. Although motion segmentation has relatively less data than vehicle detection, the shared fusion encoder benefits from the joint training to learn a generalized representation. We created our own publicly available dataset (KITTI MOD) by extending KITTI object detection to obtain static/moving annotations on the vehicles. We compared against MPNet as a baseline, which is the current state of the art for CNN-based motion detection. It is shown that the proposed two-stream architecture improves the mAP score by 21.5% on KITTI MOD. We also evaluated our algorithm on the non-automotive DAVIS dataset and obtained accuracy close to the state-of-the-art performance. The proposed network runs at 8 fps on a Titan X GPU using a basic VGG16 encoder.",
"title": ""
},
{
"docid": "neg:1840117_15",
"text": "Existing code similarity comparison methods, whether source or binary code based, are mostly not resilient to obfuscations. In the case of software plagiarism, emerging obfuscation techniques have made automated detection increasingly difficult. In this paper, we propose a binary-oriented, obfuscation-resilient method based on a new concept, longest common subsequence of semantically equivalent basic blocks, which combines rigorous program semantics with longest common subsequence based fuzzy matching. We model the semantics of a basic block by a set of symbolic formulas representing the input-output relations of the block. This way, the semantics equivalence (and similarity) of two blocks can be checked by a theorem prover. We then model the semantics similarity of two paths using the longest common subsequence with basic blocks as elements. This novel combination has resulted in strong resiliency to code obfuscation. We have developed a prototype and our experimental results show that our method is effective and practical when applied to real-world software.",
"title": ""
},
{
"docid": "neg:1840117_16",
"text": "In this paper, we consider the problem of insufficient runtime and memory-space complexities of deep convolutional neural networks for visual emotion recognition. A survey of recent compression methods and efficient neural networks architectures is provided. We experimentally compare the computational speed and memory consumption during the training and the inference stages of such methods as the weights matrix decomposition, binarization and hashing. It is shown that the most efficient optimization can be achieved with the matrices decomposition and hashing. Finally, we explore the possibility to distill the knowledge from the large neural network, if only large unlabeled sample of facial images is available.",
"title": ""
},
{
"docid": "neg:1840117_17",
"text": "Now, we come to offer you the right catalogues of book to open. multisensor data fusion a review of the state of the art is one of the literary work in this world in suitable to be reading material. That's not only this book gives reference, but also it will show you the amazing benefits of reading a book. Developing your countless minds is needed; moreover you are kind of people with great curiosity. So, the book is very appropriate for you.",
"title": ""
},
{
"docid": "neg:1840117_18",
"text": "Recently, speech recognition has become very attractive to researchers because of its many significant applications, and novel research in this area is of great importance to the academic community. The aim of this work is to find a new and appropriate feature extraction method for Arabic language recognition. In the present study, the wavelet packet transform (WPT) with modular arithmetic and a neural network were investigated for Arabic vowel recognition. The number of repetitions of the remainder was computed for each speech signal, and 266 coefficients are given to a probabilistic neural network (PNN) for classification. The reported results showed that the proposed method can perform effective analysis, with classification rates reaching 97%. Four published methods were studied for comparison, and the proposed modular wavelet packet and neural network (MWNN) expert system obtained the best recognition rate. [Emad F. Khalaf, Khaled Daqrouq Ali Morfeq. Arabic Vowels Recognition by Modular Arithmetic and Wavelets using Neural Network. Life Sci J 2014;11(3):33-41]. (ISSN:1097-8135). http://www.lifesciencesite.com. 6",
"title": ""
},
{
"docid": "neg:1840117_19",
"text": "Conservation of genetic diversity, one of the three main forms of biodiversity, is a fundamental concern in conservation biology as it provides the raw material for evolutionary change and thus the potential to adapt to changing environments. By means of meta-analyses, we tested the generality of the hypotheses that habitat fragmentation affects genetic diversity of plant populations and that certain life history and ecological traits of plants can determine differential susceptibility to genetic erosion in fragmented habitats. Additionally, we assessed whether certain methodological approaches used by authors influence the ability to detect fragmentation effects on plant genetic diversity. We found overall large and negative effects of fragmentation on genetic diversity and outcrossing rates but no effects on inbreeding coefficients. Significant increases in inbreeding coefficient in fragmented habitats were only observed in studies analyzing progenies. The mating system and the rarity status of plants explained the highest proportion of variation in the effect sizes among species. The age of the fragment was also decisive in explaining variability among effect sizes: the larger the number of generations elapsed in fragmentation conditions, the larger the negative magnitude of effect sizes on heterozygosity. Our results also suggest that fragmentation is shifting mating patterns towards increased selfing. We conclude that current conservation efforts in fragmented habitats should be focused on common or recently rare species and mainly outcrossing species and outline important issues that need to be addressed in future research on this area.",
"title": ""
}
] |
1840118 | LBANN: livermore big artificial neural network HPC toolkit | [
{
"docid": "pos:1840118_0",
"text": "A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several offthe-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only singlelayer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size (“stride”) between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance—so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). Appearing in Proceedings of the 14 International Conference on Artificial Intelligence and Statistics (AISTATS) 2011, Fort Lauderdale, FL, USA. Volume 15 of JMLR: W&CP 15. Copyright 2011 by the authors.",
"title": ""
}
] | [
{
"docid": "neg:1840118_0",
"text": "This paper investigates an application of mobile sensing: detection of potholes on roads. We describe a system and an associated algorithm to monitor the pothole conditions on the road. This system, that we call the Pothole Detection System, uses Accelerometer Sensor of Android smartphone for detection of potholes and GPS for plotting the location of potholes on Google Maps. Using a simple machine-learning approach, we show that we are able to identify the potholes from accelerometer data. The pothole detection algorithm detects the potholes in real-time. A runtime graph has been shown with the help of a charting software library ‘AChartEngine’. Accelerometer data and pothole data can be mailed to any email address in the form of a ‘.csv’ file. While designing the pothole detection algorithm we have assumed some threshold values on x-axis and z-axis. These threshold values are justified using a neural network technique which confirms an accuracy of 90%-95%. The neural network has been implemented using a machine learning framework available for Android called ‘Encog’. We evaluate our system on the outputs obtained using two, three and four wheelers. Keywords— Machine Learning, Context, Android, Neural Networks, Pothole, Sensor",
"title": ""
},
{
"docid": "neg:1840118_1",
"text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.",
"title": ""
},
{
"docid": "neg:1840118_2",
"text": "In this paper we present two deep-learning systems that competed at SemEval-2017 Task 4 “Sentiment Analysis in Twitter”. We participated in all subtasks for English tweets, involving message-level and topic-based sentiment polarity classification and quantification. We use Long Short-Term Memory (LSTM) networks augmented with two kinds of attention mechanisms, on top of word embeddings pre-trained on a big collection of Twitter messages. Also, we present a text processing tool suitable for social network messages, which performs tokenization, word normalization, segmentation and spell correction. Moreover, our approach uses no hand-crafted features or sentiment lexicons. We ranked 1st (tie) in Subtask A, and achieved very competitive results in the rest of the Subtasks. Both the word embeddings and our text processing tool1 are available to the research community.",
"title": ""
},
{
"docid": "neg:1840118_3",
"text": "In a Grid Connected Photo-voltaic System (GCPVS), maximum power is to be drawn from the PV array and injected into the grid, using suitable maximum power point tracking algorithms, converter topologies and control algorithms. Usually converter topologies such as buck, boost, buck-boost, SEPIC, flyback, push-pull etc. are used. Loss factors such as irradiance, temperature and shading effects incur no additional loss in a two-stage system, but the extra converter introduces its own losses, which makes a single-stage system more efficient than a two-stage system in applications like standalone and grid-connected renewable energy systems. In a Cuk converter, the source and load sides are separated by a capacitor, so energy transfer from the source to the load occurs through this capacitor, which leads to smaller current ripples at the load side. Thus in this paper, a Simulink model of a two-stage GCPVS using a Cuk converter is designed, simulated and compared with a GCPVS using a boost converter. For tracking the maximum power point, the most common and accurate method, the incremental conductance algorithm, is used, and the inverter control is done using the dc bus voltage algorithm.",
"title": ""
},
{
"docid": "neg:1840118_4",
"text": "The Drosophila melanogaster germ plasm has become the paradigm for understanding both the assembly of a specific cytoplasmic localization during oogenesis and its function. The posterior ooplasm is necessary and sufficient for the induction of germ cells. For its assembly, localization of gurken mRNA and its translation at the posterior pole of early oogenic stages is essential for establishing the posterior pole of the oocyte. Subsequently, oskar mRNA becomes localized to the posterior pole where its translation leads to the assembly of a functional germ plasm. Many gene products are required for producing the posterior polar plasm, but only oskar, tudor, valois, germcell-less and some noncoding RNAs are required for germ cell formation. A key feature of germ cell formation is the precocious segregation of germ cells, which isolates the primordial germ cells from mRNA turnover, new transcription, and continued cell division. nanos is critical for maintaining the transcription quiescent state and it is required to prevent transcription of Sex-lethal in pole cells. In spite of the large body of information about the formation and function of the Drosophila germ plasm, we still do not know what specifically is required to cause the pole cells to be germ cells. A series of unanswered problems is discussed in this chapter.",
"title": ""
},
{
"docid": "neg:1840118_5",
"text": "Distantly supervised relation extraction greatly reduces human efforts in extracting relational facts from unstructured texts. However, it suffers from noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graph is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In the light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists two base networks leveraging text corpus and knowledge graph respectively, and a cooperative module involving their mutual learning by the adaptive bi-directional knowledge distillation and dynamic ensemble with noisy-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840118_6",
"text": "Convolutional neural net-like structures arise from training an unstructured deep belief network (DBN) using structured simulation data of 2-D Ising Models at criticality. The convolutional structure arises not just because such a structure is optimal for the task, but also because the belief network automatically engages in block renormalization procedures to “rescale” or “encode” the input, a fundamental approach in statistical mechanics. This work primarily reviews the work of Mehta et al. [1], the group that first made the discovery that such a phenomenon occurs, and replicates their results training a DBN on Ising models, confirming that weights in the DBN become spatially concentrated during training on critical Ising samples.",
"title": ""
},
{
"docid": "neg:1840118_7",
"text": "As the biomechanical literature concerning softball pitching is evolving, there are no data to support the mechanics of softball position players. Pitching literature supports the whole kinetic chain approach including the lower extremity in proper throwing mechanics. The purpose of this project was to examine the gluteal muscle group activation patterns and their relationship with shoulder and elbow kinematics and kinetics during the overhead throwing motion of softball position players. Eighteen Division I National Collegiate Athletic Association softball players (19.2 ± 1.0 years; 68.9 ± 8.7 kg; 168.6 ± 6.6 cm) who were listed on the active playing roster volunteered. Electromyographic, kinematic, and kinetic data were collected while players caught a simulated hit or pitched ball and perform their position throw. Pearson correlation revealed a significant negative correlation between non-throwing gluteus maximus during the phase of maximum external rotation to maximum internal rotation (MIR) and elbow moments at ball release (r = −0.52). While at ball release, trunk flexion and rotation both had a positive relationship with shoulder moments at MIR (r = 0.69, r = 0.82, respectively) suggesting that the kinematic actions of the pelvis and trunk are strongly related to the actions of the shoulder during throwing.",
"title": ""
},
{
"docid": "neg:1840118_8",
"text": "The Intelligent vehicle is experiencing revolutionary growth in research and industry, but it still suffers from a lot of security vulnerabilities. Traditional security methods are incapable of providing secure IV, mainly in terms of communication. In IV communication, major issues are trust and data accuracy of received and broadcasted reliable data in the communication channel. Blockchain technology works for the cryptocurrency, Bitcoin which has been recently used to build trust and reliability in peer-to-peer networks with similar topologies to IV Communication world. IV to IV, communicate in a decentralized manner within communication networks. In this paper, we have proposed, Trust Bit (TB) for IV communication among IVs using Blockchain technology. Our proposed trust bit provides surety for each IVs broadcasted data, to be secure and reliable in every particular networks. Our Trust Bit is a symbol of trustworthiness of vehicles behavior, and vehicles legal and illegal action. Our proposal also includes a reward system, which can exchange some TB among IVs, during successful communication. For the data management of this trust bit, we have used blockchain technology in the vehicular cloud, which can store all Trust bit details and can be accessed by IV anywhere and anytime. Our proposal provides secure and reliable information. We evaluate our proposal with the help of IV communication on intersection use case which analyzes a variety of trustworthiness between IVs during communication.",
"title": ""
},
{
"docid": "neg:1840118_9",
"text": "The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems--a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based and model-based collaborative filtering are the most successful technology for building recommender systems to date and is extensively used in many commercial recommender systems. The basic assumption in these algorithms is that there are sufficient historical data for measuring similarity between products or users. However, this assumption does not hold in various application domains such as electronics retail, home shopping network, on-line retail where new products are introduced and existing products disappear from the catalog. Another such application domains is home improvement retail industry where a lot of products (such as window treatments, bathroom, kitchen or deck) are custom made. Each product is unique and there are very little duplicate products. In this domain, the probability of the same exact two products bought together is close to zero. In this paper, we discuss the challenges of providing recommendation in the domains where no sufficient historical data exist for measuring similarity between products or users. We present feature-based recommendation algorithms that overcome the limitations of the existing top-n recommendation algorithms. The experimental evaluation of the proposed algorithms in the real life data sets shows a great promise. The pilot project deploying the proposed feature-based recommendation algorithms in the on-line retail web site shows 75% increase in the recommendation revenue for the first 2 month period.",
"title": ""
},
{
"docid": "neg:1840118_10",
"text": "Today service markets are becoming business reality as for example Amazon's EC2 spot market. However, current research focusses on simplified consumer-provider service markets only. Taxes are an important market element which has not been considered yet for service markets. This paper introduces and evaluates the effects of tax systems for IaaS markets which trade virtual machines. As a digital good with well defined characteristics like storage or processing power a virtual machine can be taxed by the tax authority using different tax systems. Currently the value added tax is widely used for taxing virtual machines only. The main contribution of the paper is the so called CloudTax component, a framework to simulate and evaluate different tax systems on service markets. It allows to introduce economical principles and phenomenons like the Laffer Curve or tax incidences. The CloudTax component is based on the CloudSim simulation framework using the Bazaar-Extension for comprehensive economic simulations. We show that tax mechanisms strongly influence the efficiency of negotiation processes in the Cloud market.",
"title": ""
},
{
"docid": "neg:1840118_11",
"text": "Two major projects in the U.S. and Europe have joined in a collaboration to work toward achieving interoperability among language resources. In the U.S., the project, Sustainable Interoperability for Language Technology (SILT) has been funded by the National Science Foundation under the INTEROP program, and in Europe, FLaReNet, Fostering Language Resources Network, has been funded by the European Commission under the eContentPlus framework. This international collaborative effort involves members of the language processing community and others working in related areas to build consensus regarding the sharing of data and technologies for language resources and applications, to work towards interoperability of existing data, and, where possible, to promote standards for annotation and resource building. This paper focuses on the results of a recent workshop whose goal was to arrive at operational definitions for interoperability over four thematic areas, including metadata for describing language resources, data categories and their semantics, resource publication requirements, and software sharing.",
"title": ""
},
{
"docid": "neg:1840118_12",
"text": "Type 2 diabetes mellitus (T2DM) is a chronic disease that often results in multiple complications. Risk prediction and profiling of T2DM complications is critical for healthcare professionals to design personalized treatment plans for patients in diabetes care for improved outcomes. In this paper, we study the risk of developing complications after the initial T2DM diagnosis from longitudinal patient records. We propose a novel multi-task learning approach to simultaneously model multiple complications where each task corresponds to the risk modeling of one complication. Specifically, the proposed method strategically captures the relationships (1) between the risks of multiple T2DM complications, (2) between the different risk factors, and (3) between the risk factor selection patterns. The method uses coefficient shrinkage to identify an informative subset of risk factors from high-dimensional data, and uses a hierarchical Bayesian framework to allow domain knowledge to be incorporated as priors. The proposed method is favorable for healthcare applications because in addition to improved prediction performance, relationships among the different risks and risk factors are also identified. Extensive experimental results on a large electronic medical claims database show that the proposed method outperforms state-of-the-art models by a significant margin. Furthermore, we show that the risk associations learned and the risk factors identified lead to meaningful clinical insights. CCS CONCEPTS •Information systems→ Data mining; •Applied computing → Health informatics;",
"title": ""
},
{
"docid": "neg:1840118_13",
"text": "In this article, a novel hybrid genetic algorithm is proposed. The selection operator, crossover operator and mutation operator of the genetic algorithm have effectively been improved according to features of Sudoku puzzles. The improved selection operator has impaired the similarity of the selected chromosome and optimal chromosome in the current population such that the chromosome with more abundant genes is more likely to participate in crossover; such a designed crossover operator has possessed dual effects of self-experience and population experience based on the concept of tactfully combining PSO, thereby making the whole iterative process highly directional; crossover probability is a random number and mutation probability changes along with the fitness value of the optimal solution in the current population such that more possibilities of crossover and mutation could then be considered during the algorithm iteration. The simulation results show that the convergence rate and stability of the novel algorithm has significantly been improved.",
"title": ""
},
{
"docid": "neg:1840118_14",
"text": "Work stealing is a promising approach to constructing multithreaded program runtimes of parallel programming languages. This paper presents HERMES, an energy-efficient work-stealing language runtime. The key insight is that threads in a work-stealing environment -- thieves and victims - have varying impacts on the overall program running time, and a coordination of their execution \"tempo\" can lead to energy efficiency with minimal performance loss. The centerpiece of HERMES is two complementary algorithms to coordinate thread tempo: the workpath-sensitive algorithm determines tempo for each thread based on thief-victim relationships on the execution path, whereas the workload-sensitive algorithm selects appropriate tempo based on the size of work-stealing deques. We construct HERMES on top of Intel Cilk Plus's runtime, and implement tempo adjustment through standard Dynamic Voltage and Frequency Scaling (DVFS). Benchmarks running on HERMES demonstrate an average of 11-12% energy savings with an average of 3-4% performance loss through meter-based measurements over commercial CPUs.",
"title": ""
},
{
"docid": "neg:1840118_15",
"text": "We propose a preprocessing method to improve the performance of Principal Component Analysis (PCA) for classification problems composed of two steps; in the first step, the weight of each feature is calculated by using a feature weighting method. Then the features with weights larger than a predefined threshold are selected. The selected relevant features are then subject to the second step. In the second step, variances of features are changed until the variances of the features are corresponded to their importance. By taking the advantage of step 2 to reveal the class structure, we expect that the performance of PCA increases in classification problems. Results confirm the effectiveness of our proposed methods.",
"title": ""
},
{
"docid": "neg:1840118_16",
"text": "Distributions are often used to model uncertainty in many scientific datasets. To preserve the correlation among the spatially sampled grid locations in the dataset, various standard multivariate distribution models have been proposed in visualization literature. These models treat each grid location as a univariate random variable which models the uncertainty at that location. Standard multivariate distributions (both parametric and nonparametric) assume that all the univariate marginals are of the same type/family of distribution. But in reality, different grid locations show different statistical behavior which may not be modeled best by the same type of distribution. In this paper, we propose a new multivariate uncertainty modeling strategy to address the needs of uncertainty modeling in scientific datasets. Our proposed method is based on a statistically sound multivariate technique called Copula, which makes it possible to separate the process of estimating the univariate marginals and the process of modeling dependency, unlike the standard multivariate distributions. The modeling flexibility offered by our proposed method makes it possible to design distribution fields which can have different types of distribution (Gaussian, Histogram, KDE etc.) at the grid locations, while maintaining the correlation structure at the same time. Depending on the results of various standard statistical tests, we can choose an optimal distribution representation at each location, resulting in a more cost efficient modeling without significantly sacrificing on the analysis quality. To demonstrate the efficacy of our proposed modeling strategy, we extract and visualize uncertain features like isocontours and vortices in various real world datasets. We also study various modeling criterion to help users in the task of univariate model selection.",
"title": ""
},
{
"docid": "neg:1840118_17",
"text": "In a cross-disciplinary study, we carried out an extensive literature review to increase understanding of vulnerability indicators used in the disciplines of earthquakeand flood vulnerability assessments. We provide insights into potential improvements in both fields by identifying and comparing quantitative vulnerability indicators grouped into physical and social categories. Next, a selection of indexand curve-based vulnerability models that use these indicators are described, comparing several characteristics such as temporal and spatial aspects. Earthquake vulnerability methods traditionally have a strong focus on object-based physical attributes used in vulnerability curve-based models, while flood vulnerability studies focus more on indicators applied to aggregated land-use classes in curve-based models. In assessing the differences and similarities between indicators used in earthquake and flood vulnerability models, we only include models that separately assess either of the two hazard types. Flood vulnerability studies could be improved using approaches from earthquake studies, such as developing object-based physical vulnerability curve assessments and incorporating time-of-the-day-based building occupation patterns. Likewise, earthquake assessments could learn from flood studies by refining their selection of social vulnerability indicators. Based on the lessons obtained in this study, we recommend future studies for exploring risk assessment methodologies across different hazard types.",
"title": ""
},
{
"docid": "neg:1840118_18",
"text": "Recently, applying novel data mining techniques to financial time-series forecasting has received much research attention. However, most research addresses the US and European markets, with only a few studies for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks to six Asian stock markets, and our experimental results showed the superiority of both models compared to earlier research.",
"title": ""
},
{
"docid": "neg:1840118_19",
"text": "We describe a framework for understanding how age-related changes in adult development affect work motivation, and, building on recent life-span theories and research on cognitive abilities, personality, affect, vocational interests, values, and self-concept, identify four intraindividual change trajectories (loss, gain, reorganization, and exchange). We discuss implications of the integrative framework for the use and effectiveness of different motivational strategies with midlife and older workers in a variety of jobs, as well as abiding issues and future research directions.",
"title": ""
}
] |
1840119 | Predicting Age Range of Users over Microblog Dataset | [
{
"docid": "pos:1840119_0",
"text": "Twitter sentiment analysis (TSA) has become a hot research topic in recent years. The goal of this task is to discover the attitude or opinion of the tweets, which is typically formulated as a machine learning based text classification problem. Some methods use manually labeled data to train fully supervised models, while others use some noisy labels, such as emoticons and hashtags, for model training. In general, we can only get a limited number of training data for the fully supervised models because it is very labor-intensive and time-consuming to manually label the tweets. As for the models with noisy labels, it is hard for them to achieve satisfactory performance due to the noise in the labels although it is easy to get a large amount of data for training. Hence, the best strategy is to utilize both manually labeled data and noisy labeled data for training. However, how to seamlessly integrate these two different kinds of data into the same learning framework is still a challenge. In this paper, we present a novel model, called emoticon smoothed language model (ESLAM), to handle this challenge. The basic idea is to train a language model based on the manually labeled data, and then use the noisy emoticon data for smoothing. Experiments on real data sets demonstrate that ESLAM can effectively integrate both kinds of data to outperform those methods using only one of them.",
"title": ""
}
] | [
{
"docid": "neg:1840119_0",
"text": "Out-of-vocabulary (OOV) words represent an important source of error in large vocabulary continuous speech recognition (LVCSR) systems. These words cause recognition failures, which propagate through pipeline systems impacting the performance of downstream applications. The detection of OOV regions in the output of a LVCSR system is typically addressed as a binary classification task, where each region is independently classified using local information. In this paper, we show that jointly predicting OOV regions, and including contextual information from each region, leads to substantial improvement in OOV detection. Compared to the state-of-the-art, we reduce the missed OOV rate from 42.6% to 28.4% at 10% false alarm rate.",
"title": ""
},
{
"docid": "neg:1840119_1",
"text": "γ-Aminobutyric acid (GABA) has high physiological activity in plant stress physiology. This study showed that the application of exogenous GABA by root drenching to moderately (MS, 150 mM salt concentration) and severely salt-stressed (SS, 300 mM salt concentration) plants significantly increased endogenous GABA concentration and improved maize seedling growth but decreased glutamate decarboxylase (GAD) activity compared with non-treated ones. Exogenous GABA alleviated damage to membranes, increased in proline and soluble sugar content in leaves, and reduced water loss. After the application of GABA, maize seedling leaves suffered less oxidative damage in terms of superoxide anion (O2·-) and malondialdehyde (MDA) content. GABA-treated MS and SS maize seedlings showed increased enzymatic antioxidant activity compared with that of untreated controls, and GABA-treated MS maize seedlings had a greater increase in enzymatic antioxidant activity than SS maize seedlings. Salt stress severely damaged cell function and inhibited photosynthesis, especially in SS maize seedlings. Exogenous GABA application could reduce the accumulation of harmful substances, help maintain cell morphology, and improve the function of cells during salt stress. These effects could reduce the damage to the photosynthetic system from salt stress and improve photosynthesis and chlorophyll fluorescence parameters. GABA enhanced the salt tolerance of maize seedlings.",
"title": ""
},
{
"docid": "neg:1840119_2",
"text": "This research addresses a challenging issue that is to recognize spoken Arabic letters, that are three letters of hijaiyah that have indentical pronounciation when pronounced by Indonesian speakers but actually has different makhraj in Arabic, the letters are sa, sya and tsa. The research uses Mel-Frequency Cepstral Coefficients (MFCC) based feature extraction and Artificial Neural Network (ANN) classification method. The result shows the proposed method obtain a good accuracy with an average acuracy is 92.42%, with recognition accuracy each letters (sa, sya, and tsa) prespectivly 92.38%, 93.26% and 91.63%.",
"title": ""
},
{
"docid": "neg:1840119_3",
"text": "Nitric oxide (NO) mediates activation of satellite precursor cells to enter the cell cycle. This provides new precursor cells for skeletal muscle growth and muscle repair from injury or disease. Targeting a new drug that specifically delivers NO to muscle has the potential to promote normal function and treat neuromuscular disease, and would also help to avoid side effects of NO from other treatment modalities. In this research, we examined the effectiveness of the NO donor, iosorbide dinitrate (ISDN), and a muscle relaxant, methocarbamol, in promoting satellite cell activation assayed by muscle cell DNA synthesis in normal adult mice. The work led to the development of guaifenesin dinitrate (GDN) as a new NO donor for delivering nitric oxide to muscle. The results revealed that there was a strong increase in muscle satellite cell activation and proliferation, demonstrated by a significant 38% rise in DNA synthesis after a single transdermal treatment with the new compound for 24 h. Western blot and immunohistochemistry analyses showed that the markers of satellite cell myogenesis, expression of myf5, myogenin, and follistatin, were increased after 24 h oral administration of the compound in adult mice. This research extends our understanding of the outcomes of NO-based treatments aimed at promoting muscle regeneration in normal tissue. The potential use of such treatment for conditions such as muscle atrophy in disuse and aging, and for the promotion of muscle tissue repair as required after injury or in neuromuscular diseases such as muscular dystrophy, is highlighted.",
"title": ""
},
{
"docid": "neg:1840119_4",
"text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.",
"title": ""
},
{
"docid": "neg:1840119_5",
"text": "Stretchable microelectromechanical systems (MEMS) possess higher mechanical deformability and adaptability than devices based on conventional solid and flexible substrates, hence they are particularly desirable for biomedical, optoelectronic, textile and other innovative applications. The stretchability performance can be evaluated by the failure strain of the embedded routing and the strain applied to the elastomeric substrate. The routings are divided into five forms according to their geometry: straight; wavy; wrinkly; island-bridge; and conductive-elastomeric. These designs are reviewed and their resistance-to-failure performance is investigated. The failure modeling, numerical analysis, and fabrication of routings are presented. The current review concludes with the essential factors of the stretchable electrical routing for achieving high performance, including routing angle, width and thickness. The future challenges of device integration and reliability assessment of the stretchable routings are addressed.",
"title": ""
},
{
"docid": "neg:1840119_6",
"text": "Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding casual and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the ‘Story Cloze Test’. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of 50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss these implications for script and story learning, and offer suggestions for deeper language understanding.",
"title": ""
},
{
"docid": "neg:1840119_7",
"text": "Traditional summarization initiatives have been focused on specific types of documents such as articles, reviews, videos, image feeds, or tweets, a practice which may result in pigeonholing the summarization task in the context of modern, content-rich multimedia collections. Consequently, much of the research to date has revolved around mostly toy problems in narrow domains and working on single-source media types. We argue that summarization and story generation systems need to refocus the problem space in order to meet the information needs in the age of user-generated content in di↵erent formats and languages. Here we create a framework for flexible multimedia storytelling. Narratives, stories, and summaries carry a set of challenges in big data and dynamic multi-source media that give rise to new research in spatial-temporal representation, viewpoint generation, and explanation.",
"title": ""
},
{
"docid": "neg:1840119_8",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "neg:1840119_9",
"text": "Millimeter wave (mmWave) systems must overcome heavy signal attenuation to support high-throughput wireless communication links. The small wavelength in mmWave systems enables beamforming using large antenna arrays to combat path loss with directional transmission. Beamforming with multiple data streams, known as precoding, can be used to achieve even higher performance. Both beamforming and precoding are done at baseband in traditional microwave systems. In mmWave systems, however, the high cost of mixed-signal and radio frequency chains (RF) makes operating in the passband and analog domains attractive. This hardware limitation places additional constraints on precoder design. In this paper, we consider single user beamforming and precoding in mmWave systems with large arrays. We exploit the structure of mmWave channels to formulate the precoder design problem as a sparsity constrained least squares problem. Using the principle of basis pursuit, we develop a precoding algorithm that approximates the optimal unconstrained precoder using a low dimensional basis representation that can be efficiently implemented in RF hardware. We present numerical results on the performance of the proposed algorithm and show that it allows mmWave systems to approach waterfilling capacity.",
"title": ""
},
{
"docid": "neg:1840119_10",
"text": "This paper discusses the importance, the complexity and the challenges of mapping mobile robot’s unknown and dynamic environment, besides the role of sensors and the problems inherited in map building. These issues remain largely an open research problems in developing dynamic navigation systems for mobile robots. The paper presenst the state of the art in map building and localization for mobile robots navigating within unknown environment, and then introduces a solution for the complex problem of autonomous map building and maintenance method with focus on developing an incremental grid based mapping technique that is suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the environment of the robot. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensory information while wandering in it and stay away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in parallel and distributed framework. Simulation based experiments has been conducted and illustrated to show the validity of the developed mapping and obstacle avoidance approach.",
"title": ""
},
{
"docid": "neg:1840119_11",
"text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized nonmakeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with the one on pixel level for reconstructing appealing facial images and the other on feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses. Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup",
"title": ""
},
{
"docid": "neg:1840119_12",
"text": "Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim it is to cover such a gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin’s role theory to facilitate the generation of working groups in an educational context. This tool improves current state of the art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student instead of self-perception questionnaires; ii) it handles uncertainty with regard to each student’s predominant team role; iii) it is iterative since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.",
"title": ""
},
{
"docid": "neg:1840119_13",
"text": "Equation (1.1) expresses v0 as a convex combination of the neighbouring points v1, . . . , vk. In the simplest case k = 3, the weights λ1, λ2, λ3 are uniquely determined by (1.1) and (1.2) alone; they are the barycentric coordinates of v0 with respect to the triangle [v1, v2, v3], and they are positive. This motivates calling any set of non-negative weights satisfying (1.1–1.2) for general k, a set of coordinates for v0 with respect to v1, . . . , vk. There has long been an interest in generalizing barycentric coordinates to k-sided polygons with a view to possible multisided extensions of Bézier surfaces; see for example [8 ]. In this setting, one would normally be free to choose v1, . . . , vk to form a convex polygon but would need to allow v0 to be any point inside the polygon or on the polygon, i.e. on an edge or equal to a vertex. More recently, the need for such coordinates arose in methods for parameterization [2 ] and morphing [5 ], [6 ] of triangulations. Here the points v0, v1, . . . , vk will be vertices of a (planar) triangulation and so the point v0 will never lie on an edge of the polygon formed by v1, . . . , vk. If we require no particular properties of the coordinates, the problem is easily solved. Because v0 lies in the convex hull of v1, . . . , vk, there must exist at least one triangle T = [vi1 , vi2 , vi3 ] which contains v0, and so we can take λi1 , λi2 , λi3 to be the three barycentric coordinates of v0 with respect to T , and make the remaining coordinates zero. However, these coordinates depend randomly on the choice of triangle. An improvement is to take an average of such coordinates over certain covering triangles, as proposed in [2 ]. The resulting coordinates depend continuously on v0, v1, . . . , vk, yet still not smoothly. The",
"title": ""
},
{
"docid": "neg:1840119_14",
"text": "Time series classification is an increasing research topic due to the vast amount of time series data that is being created over a wide variety of fields. The particularity of the data makes it a challenging task and different approaches have been taken, including the distance based approach. 1-NN has been a widely used method within distance based time series classification due to its simplicity but still good performance. However, its supremacy may be attributed to being able to use specific distances for time series within the classification process and not to the classifier itself. With the aim of exploiting these distances within more complex classifiers, new approaches have arisen in the past few years that are competitive or which outperform the 1-NN based approaches. In some cases, these new methods use the distance measure to transform the series into feature vectors, bridging the gap between time series and traditional classifiers. In other cases, the distances are employed to obtain a time series kernel and enable the use of kernel methods for time series classification. One of the main challenges is that a kernel function must be positive semi-definite, a matter that is also addressed within this review. The presented review includes a taxonomy of all those methods that aim to classify time series using a distance based approach, as well as a discussion of the strengths and weaknesses of each method.",
"title": ""
},
{
"docid": "neg:1840119_15",
"text": "We present Confidence-Based Autonomy (CBA), an interactive algorithm for policy learning from demonstration. The CBA algorithm consists of two components which take advantage of the complimentary abilities of humans and computer agents. The first component, Confident Execution, enables the agent to identify states in which demonstration is required, to request a demonstration from the human teacher and to learn a policy based on the acquired data. The algorithm selects demonstrations based on a measure of action selection confidence, and our results show that using Confident Execution the agent requires fewer demonstrations to learn the policy than when demonstrations are selected by a human teacher. The second algorithmic component, Corrective Demonstration, enables the teacher to correct any mistakes made by the agent through additional demonstrations in order to improve the policy and future task performance. CBA and its individual components are compared and evaluated in a complex simulated driving domain. The complete CBA algorithm results in the best overall learning performance, successfully reproducing the behavior of the teacher while balancing the tradeoff between number of demonstrations and number of incorrect actions during learning.",
"title": ""
},
{
"docid": "neg:1840119_16",
"text": "The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instrument (e.g., driver-less cars) face is how to ensure that these instruments will not engage in unethical conduct (not to be conflated with illegal conduct). The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.",
"title": ""
},
{
"docid": "neg:1840119_17",
"text": "Web-based programming exercises are a useful way for students to practice and master essential concepts and techniques presented in introductory programming courses. Although these systems are used fairly widely, we have a limited understanding of how students use these systems, and what can be learned from the data collected by these systems.\n In this paper, we perform a preliminary exploratory analysis of data collected by the CloudCoder programming exercise system from five introductory courses taught in two programming languages across three colleges and universities. We explore a number of interesting correlations in the data that confirm existing hypotheses. Finally, and perhaps most importantly, we demonstrate the effectiveness and future potential of systems like CloudCoder to help us study novice programmers.",
"title": ""
},
{
"docid": "neg:1840119_18",
"text": "In recent years, sustainability has been a major focus of fashion business operations because fashion industry development causes harmful effects to the environment, both indirectly and directly. The sustainability of the fashion industry is generally based on several levels and this study focuses on investigating the optimal supplier selection problem for sustainable materials supply in fashion clothing production. Following the ground rule that sustainable development is based on the Triple Bottom Line (TBL), this paper has framed twelve criteria from the economic, environmental and social perspectives for evaluating suppliers. The well-established multi-criteria decision making tool Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is employed for ranking potential suppliers among the pool of suppliers. Through a real case study, the proposed approach has been applied and some managerial implications are derived.",
"title": ""
},
{
"docid": "neg:1840119_19",
"text": "RoadGraph is a graph based environmental model for driver assistance systems. It integrates information from different sources like digital maps, onboard sensors and V2X communication into one single model about vehicle's environment. At the moment of information aggregation some function independent situation analysis is done. In this paper the concept of the RoadGraph is described in detail and first results are shown.",
"title": ""
}
] |
1840120 | Pke: an Open Source Python-based Keyphrase Extraction Toolkit | [
{
"docid": "pos:1840120_0",
"text": "Keyphrase extraction is the task of identifying single or multi-word expressions that represent the main topics of a document. In this paper we present TopicRank, a graph-based keyphrase extraction method that relies on a topical representation of the document. Candidate keyphrases are clustered into topics and used as vertices in a complete graph. A graph-based ranking model is applied to assign a significance score to each topic. Keyphrases are then generated by selecting a candidate from each of the topranked topics. We conducted experiments on four evaluation datasets of different languages and domains. Results show that TopicRank significantly outperforms state-of-the-art methods on three datasets.",
"title": ""
}
] | [
{
"docid": "neg:1840120_0",
"text": "Carcass mass and carcass clothing are factors of potential high forensic importance. In casework, corpses differ in mass and kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5–15 kg, medium carcasses 15.1–30 kg, medium/large carcasses 35–50 kg, large carcasses 55–70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only minor influence on some phenomena related to the advanced decay. Carcass mass differently affected particular gross processes in decomposition. Putrefaction was more efficient in larger carcasses, which manifested itself through earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with relatively low average rate, resulting in slower mass loss and later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in these carcasses on which active decay was driven solely by larval blowflies. If a blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. 
These results indicate that lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. Pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In a range of low mass decomposition rate increased with an increase in mass, then at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg, the rate slightly increased. Until about 100 accumulated degree-days larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.",
"title": ""
},
{
"docid": "neg:1840120_1",
"text": "Significant vulnerabilities have recently been identified in collaborative filtering recommender systems. These vulnerabilities mostly emanate from the open nature of such systems and their reliance on userspecified judgments for building profiles. Attackers can easily introduce biased data in an attempt to force the system to “adapt” in a manner advantageous to them. Our research in secure personalization is examining a range of attack models, from the simple to the complex, and a variety of recommendation techniques. In this chapter, we explore an attack model that focuses on a subset of users with similar tastes and show that such an attack can be highly successful against both user-based and item-based collaborative filtering. We also introduce a detection model that can significantly decrease the impact of this attack.",
"title": ""
},
{
"docid": "neg:1840120_2",
"text": "A correlational study examined relationships between motivational orientation, self-regulated learning, and classroom academic performance for 173 seventh graders from eight science and seven English classes. A self-report measure of student self-efficacy, intrinsic value, test anxiety, self-regulation, and use of learning strategies was administered, and performance data were obtained from work on classroom assignments. Self-efficacy and intrinsic value were positively related to cognitive engagement and performance. Regression analyses revealed that, depending on the outcome measure, self-regulation, self-efficacy, and test anxiety emerged as the best predictors of performance. Intrinsic value did not have a direct influence on performance but was strongly related to self-regulation and cognitive strategy use, regardless of prior achievement level. The implications of individual differences in motivational orientation for cognitive engagement and self-regulation in the classroom are discussed.",
"title": ""
},
{
"docid": "neg:1840120_3",
"text": "Mutations of SALL1 related to spalt of Drosophila have been found to cause Townes-Brocks syndrome, suggesting a function of SALL1 for the development of anus, limbs, ears, and kidneys. No function is yet known for SALL2, another human spalt-like gene. The structure of SALL2 is different from SALL1 and all other vertebrate spalt-like genes described in mouse, Xenopus, and Medaka, suggesting that SALL2-like genes might also exist in other vertebrates. Consistent with this hypothesis, we isolated and characterized a SALL2 homologous mouse gene, Msal-2. In contrast to other vertebrate spalt-like genes both SALL2 and Msal-2 encode only three double zinc finger domains, the most carboxyterminal of which only distantly resembles spalt-like zinc fingers. The evolutionary conservation of SALL2/Msal-2 suggests that two lines of sal-like genes with presumably different functions arose from an early evolutionary duplication of a common ancestor gene. Msal-2 is expressed throughout embryonic development but also in adult tissues, predominantly in brain. However, the function of SALL2/Msal-2 still needs to be determined.",
"title": ""
},
{
"docid": "neg:1840120_4",
"text": "The neuropeptide calcitonin gene-related peptide (CGRP) is implicated in the underlying pathology of migraine by promoting the development of a sensitized state of primary and secondary nociceptive neurons. The ability of CGRP to initiate and maintain peripheral and central sensitization is mediated by modulation of neuronal, glial, and immune cells in the trigeminal nociceptive signaling pathway. There is accumulating evidence to support a key role of CGRP in promoting cross excitation within the trigeminal ganglion that may help to explain the high co-morbidity of migraine with rhinosinusitis and temporomandibular joint disorder. In addition, there is emerging evidence that CGRP facilitates and sustains a hyperresponsive neuronal state in migraineurs mediated by reported risk factors such as stress and anxiety. In this review, the significant role of CGRP as a modulator of the trigeminal system will be discussed to provide a better understanding of the underlying pathology associated with the migraine phenotype.",
"title": ""
},
{
"docid": "neg:1840120_5",
"text": "Action selection is a fundamental decision process for us, and depends on the state of both our body and the environment. Because signals in our sensory and motor systems are corrupted by variability or noise, the nervous system needs to estimate these states. To select an optimal action these state estimates need to be combined with knowledge of the potential costs or rewards of different action outcomes. We review recent studies that have investigated the mechanisms used by the nervous system to solve such estimation and decision problems, which show that human behaviour is close to that predicted by Bayesian Decision Theory. This theory defines optimal behaviour in a world characterized by uncertainty, and provides a coherent way of describing sensorimotor processes.",
"title": ""
},
{
"docid": "neg:1840120_6",
"text": "The capability to operate cloud-native applications can generate enormous business growth and value. But enterprise architects should be aware that cloud-native applications are vulnerable to vendor lock-in. We investigated cloud-native application design principles, public cloud service providers, and industrial cloud standards. All results indicate that most cloud service categories seem to foster vendor lock-in situations which might be especially problematic for enterprise architectures. This might sound disillusioning at first. However, we present a reference model for cloud-native applications that relies only on a small subset of well standardized IaaS services. The reference model can be used for codifying cloud technologies. It can guide technology identification, classification, adoption, research and development processes for cloud-native application and for vendor lock-in aware enterprise architecture engineering methodologies.",
"title": ""
},
{
"docid": "neg:1840120_7",
"text": "The restoration of endodontic tooth is always a challenge for the clinician, not only due to excessive loss of tooth structure but also invasion of the biological width due to large decayed lesions. In this paper, the 7 most common clinical scenarios in molars with class II lesions ever deeper were examined. This includes both the type of restoration (direct or indirect) and the management of the cavity margin, such as the need for deep margin elevation (DME) or crown lengthening. It is necessary to have the DME when the healthy tooth remnant is in the sulcus or at the epithelium level. For caries that reaches the connective tissue or the bone crest, crown lengthening is required. Endocrowns are a good treatment option in the endodontically treated tooth when the loss of structure is advanced.",
"title": ""
},
{
"docid": "neg:1840120_8",
"text": "In medical diagnoses and treatments, e.g., endoscopy, dosage transition monitoring, it is often desirable to wirelessly track an object that moves through the human GI tract. In this paper, we propose a magnetic localization and orientation system for such applications. This system uses a small magnet enclosed in the object to serve as excitation source, so it does not require the connection wire and power supply for the excitation signal. When the magnet moves, it establishes a static magnetic field around, whose intensity is related to the magnet's position and orientation. With the magnetic sensors, the magnetic intensities in some predetermined spatial positions can be detected, and the magnet's position and orientation parameters can be computed based on an appropriate algorithm. Here, we propose a real-time tracking system developed by a cubic magnetic sensor array made of Honeywell 3-axis magnetic sensors, HMC1043. Using some efficient software modules and calibration methods, the system can achieve satisfactory tracking accuracy if the cubic sensor array has enough number of 3-axis magnetic sensors. The experimental results show that the average localization error is 1.8 mm.",
"title": ""
},
{
"docid": "neg:1840120_9",
"text": "Decision trees and random forests are common classifiers with widespread use. In this paper, we develop two protocols for privately evaluating decision trees and random forests. We operate in the standard two-party setting where the server holds a model (either a tree or a forest), and the client holds an input (a feature vector). At the conclusion of the protocol, the client learns only the model’s output on its input and a few generic parameters concerning the model; the server learns nothing. The first protocol we develop provides security against semi-honest adversaries. Next, we show an extension of the semi-honest protocol that obtains one-sided security against malicious adversaries. We implement both protocols and show that both variants are able to process trees with several hundred decision nodes in just a few seconds and a modest amount of bandwidth. Compared to previous semi-honest protocols for private decision tree evaluation, we demonstrate tenfold improvements in computation and bandwidth.",
"title": ""
},
{
"docid": "neg:1840120_10",
"text": "A classical heuristic in software testing is to reward diversity, which implies that a higher priority must be assigned to test cases that differ the most from those already prioritized. This approach is commonly known as similarity-based test prioritization (SBTP) and can be realized using a variety of techniques. The objective of our study is to investigate whether SBTP is more effective at finding defects than random permutation, as well as determine which SBTP implementations lead to better results. To achieve our objective, we implemented five different techniques from the literature and conducted an experiment using the defects4j dataset, which contains 395 real faults from six real-world open-source Java programs. Findings indicate that running the most dissimilar test cases early in the process is largely more effective than random permutation (Vargha–Delaney A [VDA]: 0.76–0.99 observed using normalized compression distance). No technique was found to be superior with respect to the effectiveness. Locality-sensitive hashing was, to a small extent, less effective than other SBTP techniques (VDA: 0.38 observed in comparison to normalized compression distance), but its speed largely outperformed the other techniques (i.e., it was approximately 5–111 times faster). Our results bring to mind the well-known adage, “don’t put all your eggs in one basket”. To effectively consume a limited testing budget, one should spread it evenly across different parts of the system by running the most dissimilar test cases early in the testing process.",
"title": ""
},
{
"docid": "neg:1840120_11",
"text": "Lifting is a common manual material handling task performed in the workplaces. It is considered as one of the main risk factors for Work-related Musculoskeletal Disorders. To improve work place safety, it is necessary to assess musculoskeletal and biomechanical risk exposures associated with these tasks, which requires very accurate 3D pose. Existing approaches mainly utilize marker-based sensors to collect 3D information. However, these methods are usually expensive to setup, time-consuming in process, and sensitive to the surrounding environment. In this study, we propose a multi-view based deep perceptron approach to address aforementioned limitations. Our approach consists of two modules: a \"view-specific perceptron\" network extracts rich information independently from the image of view, which includes both 2D shape and hierarchical texture information; while a \"multi-view integration\" network synthesizes information from all available views to predict accurate 3D pose. To fully evaluate our approach, we carried out comprehensive experiments to compare different variants of our design. The results prove that our approach achieves comparable performance with former marker-based methods, i.e. an average error of 14.72 ± 2.96 mm on the lifting dataset. The results are also compared with state-of-the-art methods on the HumanEva-I dataset [1], which demonstrates the superior performance of our approach.",
"title": ""
},
{
"docid": "neg:1840120_12",
"text": "Examinations are the most crucial section of any educational system. They are intended to measure a student's knowledge, skills and aptitude. At any institute, a great deal of manual effort is required to plan and arrange examinations. This includes making the seating arrangement for students as well as the supervision duty chart for invigilators. Many institutes perform this task manually using Excel sheets, resulting in excessive wastage of time and manpower. Automating the entire system can help solve the stated problem efficiently, saving a lot of time. This paper presents automatic exam seating allocation. It works in two modules: first, Students Seating Arrangement (SSA), and second, Supervision Duties Allocation (SDA). It assigns the classrooms and the duties to the teachers in any institution. An input-output data is obtained from the real system, which is found out manually by the organizers who set up the seating arrangement and chalk out the supervision duties. The results obtained using the real system and these two models are compared. The application shows that the modules are highly efficient, low-cost, and can be widely used in various colleges and universities.",
"title": ""
},
{
"docid": "neg:1840120_13",
"text": "The popularity of FPGAs is rapidly growing due to the unique advantages that they offer. However, their distinctive features also raise new questions concerning the security and communication capabilities of an FPGA-based hardware platform. In this paper, we explore some of the limits of FPGA side-channel communication. Specifically, we identify a previously unexplored capability that significantly increases both the potential benefits and risks associated with side-channel communication on an FPGA: an in-device receiver. We designed and implemented three new communication mechanisms: speed modulation, timing modulation and pin hijacking. These non-traditional interfacing techniques have the potential to provide reliable communication with an estimated maximum bandwidth of 3.3 bit/sec, 8 Kbits/sec, and 3.4 Mbits/sec, respectively.",
"title": ""
},
{
"docid": "neg:1840120_14",
"text": "Chimeric antigen receptors (CARs) have been used to redirect the specificity of autologous T cells against leukemia and lymphoma with promising clinical results. Extending this approach to allogeneic T cells is problematic as they carry a significant risk of graft-versus-host disease (GVHD). Natural killer (NK) cells are highly cytotoxic effectors, killing their targets in a non-antigen-specific manner without causing GVHD. Cord blood (CB) offers an attractive, allogeneic, off-the-shelf source of NK cells for immunotherapy. We transduced CB-derived NK cells with a retroviral vector incorporating the genes for CAR-CD19, IL-15 and inducible caspase-9-based suicide gene (iC9), and demonstrated efficient killing of CD19-expressing cell lines and primary leukemia cells in vitro, with marked prolongation of survival in a xenograft Raji lymphoma murine model. Interleukin-15 (IL-15) production by the transduced CB-NK cells critically improved their function. Moreover, iC9/CAR.19/IL-15 CB-NK cells were readily eliminated upon pharmacologic activation of the iC9 suicide gene. In conclusion, we have developed a novel approach to immunotherapy using engineered CB-derived NK cells, which are easy to produce, exhibit striking efficacy and incorporate safety measures to limit toxicity. This approach should greatly improve the logistics of delivering this therapy to large numbers of patients, a major limitation to current CAR-T-cell therapies.",
"title": ""
},
{
"docid": "neg:1840120_15",
"text": "OBJECTIVE\nTo determine the current values and estimate the projected values (to the year 2041) for annual number of proximal femoral fractures (PFFs), age-adjusted rates of fracture, rates of death in the acute care setting, associated length of stay (LOS) in hospital, and seasonal variation by sex and age in elderly Canadians.\n\n\nDESIGN\nHospital discharge data for fiscal year 1993-94 from the Canadian Institute for Health Information were used to determine PFF incidence, and Statistics Canada population projections were used to estimate the rate and number of PFFs to 2041.\n\n\nSETTING\nCanada.\n\n\nPARTICIPANTS\nCanadian patients 65 years of age or older who underwent hip arthroplasty.\n\n\nOUTCOME MEASURES\nPFF rates, death rates and LOS by age, sex and province.\n\n\nRESULTS\nIn 1993-94 the incidence of PFF increased exponentially with increasing age. The age-adjusted rates were 479 per 100,000 for women and 187 per 100,000 for men. The number of PFFs was estimated at 23,375 (17,823 in women and 5552 in men), with a projected increase to 88,124 in 2041. The rate of death during the acute care stay increased exponentially with increasing age. The death rates for men were twice those for women. In 1993-94 an estimated 1570 deaths occurred in the acute care setting, and 7000 deaths were projected for 2041. LOS in the acute care setting increased with advancing age, as did variability in LOS, which suggests a more heterogeneous case mix with advancing age. The LOS for 1993-94 and 2041 was estimated at 465,000 and 1.8 million patient-days respectively. Seasonal variability in the incidence of PFFs by sex was not significant. Significant season-province interactions were seen (p < 0.05); however, the differences in incidence were small (on the order of 2% to 3%) and were not considered to have a large effect on resource use in the acute care setting.\n\n\nCONCLUSIONS\nOn the assumption that current conditions contributing to hip fractures will remain constant, the number of PFFs will rise exponentially over the next 40 years. The results of this study highlight the serious implications for Canadians if incidence rates are not reduced by some form of intervention.",
"title": ""
},
{
"docid": "neg:1840120_16",
"text": "Network Security is one of the important concepts in data security, as the data to be uploaded should be made secure. To make data secure, there exist a number of algorithms like AES (Advanced Encryption Standard), IDEA (International Data Encryption Algorithm), etc. These techniques of making the data secure come under Cryptography. Involving the Internet of Things (IoT) in Cryptography is an emerging domain. IoT can be defined as controlling things located at any part of the world via the Internet. So, IoT involves data security, i.e. Cryptography. Here, in this paper we discuss how data can be made secure for IoT using Cryptography.",
"title": ""
},
{
"docid": "neg:1840120_17",
"text": "Recent advances in computer vision technologies have made possible the development of intelligent monitoring systems for video surveillance and ambientassisted living. By using this technology, these systems are able to automatically interpret visual data from the environment and perform tasks that would have been unthinkable years ago. These achievements represent a radical improvement but they also suppose a new threat to individual’s privacy. The new capabilities of such systems give them the ability to collect and index a huge amount of private information about each individual. Next-generation systems have to solve this issue in order to obtain the users’ acceptance. Therefore, there is a need for mechanisms or tools to protect and preserve people’s privacy. This paper seeks to clarify how privacy can be protected in imagery data, so as a main contribution a comprehensive classification of the protection methods for visual privacy as well as an up-to-date review of them are provided. A survey of the existing privacy-aware intelligent monitoring systems and a valuable discussion of important aspects of visual privacy are also provided.",
"title": ""
},
{
"docid": "neg:1840120_18",
"text": "We present an image set classification algorithm based on unsupervised clustering of labeled training and unlabeled test data where labels are only used in the stopping criterion. The probability distribution of each class over the set of clusters is used to define a true set based similarity measure. To this end, we propose an iterative sparse spectral clustering algorithm. In each iteration, a proximity matrix is efficiently recomputed to better represent the local subspace structure. Initial clusters capture the global data structure and finer clusters at the later stages capture the subtle class differences not visible at the global scale. Image sets are compactly represented with multiple Grassmannian manifolds which are subsequently embedded in Euclidean space with the proposed spectral clustering algorithm. We also propose an efficient eigenvector solver which not only reduces the computational cost of spectral clustering by many folds but also improves the clustering quality and final classification results. Experiments on five standard datasets and comparison with seven existing techniques show the efficacy of our algorithm.",
"title": ""
},
{
"docid": "neg:1840120_19",
"text": "License plate recognition usually contains three steps, namely license plate detection/localization, character segmentation and character recognition. When reading characters on a license plate one by one after the license plate detection step, it is crucial to accurately segment the characters. The segmentation step may be affected by many factors such as license plate boundaries (frames). The recognition accuracy will be significantly reduced if the characters are not properly segmented. This paper presents an efficient algorithm for character segmentation on a license plate. The algorithm follows a license plate detection step that uses an AdaBoost algorithm. It is based on an efficient and accurate skew and slant correction of license plates, and works together with boundary (frame) removal of license plates. The algorithm is efficient and can be applied in real-time applications. Experiments are performed to show the accuracy of segmentation.",
"title": ""
}
] |
1840121 | Machine learning, medical diagnosis, and biomedical engineering research - commentary | [
{
"docid": "pos:1840121_0",
"text": "Feature selection has been the focus of interest for quite some time and much work has been done. With the creation of huge databases and the consequent requirements for good machine learning techniques, new problems arise and novel approaches to feature selection are in demand. This survey is a comprehensive overview of many existing methods from the 1970's to the present. It identifies four steps of a typical feature selection method, categorizes the different existing methods in terms of generation procedures and evaluation functions, and reveals hitherto unattempted combinations of generation procedures and evaluation functions. Representative methods are chosen from each category for detailed explanation and discussion via example. Benchmark datasets with different characteristics are used for comparative study. The strengths and weaknesses of different methods are explained. Guidelines for applying feature selection methods are given based on data types and domain characteristics. This survey identifies the future research areas in feature selection, introduces newcomers to this field, and paves the way for practitioners who search for suitable methods for solving domain-specific real-world applications. (Intelligent Data Analysis, Vol. 1, no. 3, http://www.elsevier.com/locate/ida)",
"title": ""
}
] | [
{
"docid": "neg:1840121_0",
"text": "A set of tools is being prepared in the frame of ESA activity [18191/04/NL] labelled: \"Mars Rover Chassis Evaluation Tools\" to support design, selection and optimisation of space exploration rovers in Europe. This activity is carried out jointly by Contraves Space as prime contractor, EPFL, DLR, Surrey Space Centre and EADS Space Transportation. This paper describes the current results of this study and its intended use for selection, design and optimisation of different wheeled vehicles. These tools would also allow future developments for more efficient motion control of rovers. INTRODUCTION AND MOTIVATION A set of tools is being developed to support the design of planetary rovers in Europe. The RCET will enable accurate predictions and characterisations of rover performances as related to the locomotion subsystem. This infrastructure consists of both S/W and H/W elements that will be interwoven to result in a user-friendly environment. The actual need for mobility increased in terms of range and duration. In this respect, redesigning specific aspects of the past rover concepts, in particular the development of most suitable all-terrain performances, is appropriate [9]. Analysis and design methodologies for terrestrial surface vehicles to operate on unprepared surfaces have been successfully applied to planet rover developments for the first time during the Apollo LRV manned lunar rover programme of the late 1960's and early 1970's [1,2]. Key to this accomplishment and to rational surface vehicle designs in general are quantitative descriptions of the terrain and of the interaction between the terrain and the vehicle. Not only the wheel/ground interaction is essential for efficient locomotion, but also the rover kinematics concepts. In recent terrestrial off-the-road vehicle development and acquisition, especially in the military, the so-called 'Virtual Proving Ground' (VPG) Simulation Technology has become essential. The integrated environments previously available to design engineers involved sophisticated hardware and software and cost hundreds of thousands of Euros. The experimentation and operational costs associated with the use of such instruments were even more alarming. The promise of VPG is to lower the risk and cost in vehicle definition and design by allowing early concept characterisation and trade-offs based on numerical models without having to rely on prototyping for concept assessment. A similar approach is proposed for future European planetary rover programmes and is to be enabled by RCET. The first part of this paper describes the methodology used in the RCET activity and gives an overview of the different tools under development. The next section details the theory and modules used for the simulation. Finally the last section relates the first results and the future work, and concludes this paper. In Proceedings of the 8th ESA Workshop on Advanced Space Technologies for Robotics and Automation 'ASTRA 2004', ESTEC, Noordwijk, The Netherlands, November 2-4, 2004",
"title": ""
},
{
"docid": "neg:1840121_1",
"text": "We present Residual Policy Learning (RPL): a simple method for improving nondifferentiable policies using model-free deep reinforcement learning. RPL thrives in complex robotic manipulation tasks where good but imperfect controllers are available. In these tasks, reinforcement learning from scratch remains data-inefficient or intractable, but learning a residual on top of the initial controller can yield substantial improvement. We study RPL in five challenging MuJoCo tasks involving partial observability, sensor noise, model misspecification, and controller miscalibration. By combining learning with control algorithms, RPL can perform long-horizon, sparse-reward tasks for which reinforcement learning alone fails. Moreover, we find that RPL consistently and substantially improves on the initial controllers. We argue that RPL is a promising approach for combining the complementary strengths of deep reinforcement learning and robotic control, pushing the boundaries of what either can achieve independently.",
"title": ""
},
{
"docid": "neg:1840121_2",
"text": "The persistent increase in the world's population is demanding a greater supply of food. Hence there is a significant need for advancement in cultivation to meet future food needs. It is important to know moisture levels in soil to maximize the output. But most farmers cannot afford high-cost devices to measure soil moisture. Our research work in this paper focuses on an accurate, home-made, low-cost moisture sensor. In this paper we present a method to manufacture a soil moisture sensor to estimate moisture content in soil, thereby providing information about the required water supply for good cultivation. This sensor is tested with several samples of soil and achieves considerable accuracy. Measuring soil moisture is an effective way to determine the condition of soil and get information about the quantity of water that needs to be supplied for cultivation. Two separate methods are illustrated in this paper to determine soil moisture over an area and along the depth.",
"title": ""
},
{
"docid": "neg:1840121_3",
"text": "We present the design and implementation of a system which allows a standard paper-based exam to be graded via tablet computers. The paper exam is given normally in a course, with a specialized footer that allows for automated recognition of each exam page. The exam pages are then scanned in via a high-speed scanner, graded by one or more people using tablet computers, and returned electronically to the students. The system provides many advantages over regular paper-based exam grading, and boasts a faster grading experience than traditional grading methods.",
"title": ""
},
{
"docid": "neg:1840121_4",
"text": "In this paper, we analyze whether cascaded usage of the context encoder with increasing input can improve the results of inpainting. For this purpose, we train a context encoder for 64x64 pixel images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, in both the training and evaluation phases. As a result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.",
"title": ""
},
{
"docid": "neg:1840121_5",
"text": "Low back pain (LBP) is a problem worldwide, with a lifetime prevalence reported to be as high as 84%. The prevalence of chronic low back pain is about 23%, with 11–12% of the population being disabled by low back pain [1]. LBP is defined as pain experienced between the twelfth rib and the inferior gluteal fold, with or without associated leg pain [2]. Based on etiology, LBP is classified as Specific Low Back Pain and Non-Specific Low Back Pain. Of all LBP patients, 10% are attributed to Specific and 90% to Non-Specific Low Back Pain (NSLBP) [3]. Specific LBP comprises back pain with specific etiological causes such as Spondylolisthesis, Spondylosis, Ankylosing Spondylitis, prolapsed disc, etc.",
"title": ""
},
{
"docid": "neg:1840121_6",
"text": "The triboelectric effect works on the principles of triboelectrification and electrostatic induction. This principle is used to generate voltage by converting mechanical energy into electrical energy. This paper presents the charging behavior of different capacitors by rubbing two different materials using mechanical motion. The numerical and simulation modeling describes the charging performance of a TENG with a bridge rectifier. It is also demonstrated that a 10 μF capacitor can be charged to a maximum of 24.04 volts in 300 seconds, and that it provides a maximum energy density of 2800 μJ/cm3. Such a system can be used for ultralow-power electronic devices, biomedical devices, self-powered appliances, etc.",
"title": ""
},
{
"docid": "neg:1840121_7",
"text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013",
"title": ""
},
{
"docid": "neg:1840121_8",
"text": "Flip chip assembly technology is an attractive solution for high I/O density and fine-pitch microelectronics packaging. Recently, highly efficient GaN-based light-emitting diodes (LEDs) have undergone rapid development, and flip chip bonding has been widely applied to fabricate high-brightness GaN micro-LED arrays [1]. The flip chip GaN LED has some advantages over the traditional top-emission LED, including improved current spreading, higher light extraction efficiency, better thermal dissipation capability and the potential of further optical component integration [2, 3]. With the advantages of flip chip assembly, micro-LED (μLED) arrays with high I/O density can be fabricated with improved luminous efficiency over conventional p-side-up micro-LED arrays and are suitable for many potential applications, such as micro-displays, bio-photonics and visible light communications (VLC). In particular, the μLED array based self-emissive micro-display has the promise to achieve high brightness and contrast, reliability, long life and compactness, which conventional micro-displays like LCD, OLED, etc., cannot compete with. In this study, a GaN micro-LED array device with a flip chip assembly package process was presented. The bonding quality of the flip chip high density micro-LED array is tested by a daisy chain test. The p-n junctions of the devices are measured for electrical characteristics. The illumination condition of each micro-diode pixel was examined under a forward bias. Failure mode analysis was performed using cross sectioning and scanning electron microscopy (SEM). Finally, the fully packaged micro-LED array device is demonstrated as a prototype of a dice projector system.",
"title": ""
},
{
"docid": "neg:1840121_9",
"text": "This paper shows a Class-E RF power amplifier designed to obtain a flat-top transistor-voltage waveform whose peak value is 81% of the peak value of the voltage of a “Classical” Class-E amplifier.",
"title": ""
},
{
"docid": "neg:1840121_10",
"text": "Automated labeling of anatomical structures in medical images is very important in many neuroscience studies. Recently, patch-based labeling has been widely investigated to alleviate the possible mis-alignment when registering atlases to the target image. However, the weights used for label fusion from the registered atlases are generally computed independently and thus lack the capability of preventing the ambiguous atlas patches from contributing to the label fusion. More critically, these weights are often calculated based only on the simple patch similarity, thus not necessarily providing optimal solution for label fusion. To address these limitations, we propose a generative probability model to describe the procedure of label fusion in a multi-atlas scenario, for the goal of labeling each point in the target image by the best representative atlas patches that also have the largest labeling unanimity in labeling the underlying point correctly. Specifically, sparsity constraint is imposed upon label fusion weights, in order to select a small number of atlas patches that best represent the underlying target patch, thus reducing the risks of including the misleading atlas patches. The labeling unanimity among atlas patches is achieved by exploring their dependencies, where we model these dependencies as the joint probability of each pair of atlas patches in correctly predicting the labels, by analyzing the correlation of their morphological error patterns and also the labeling consensus among atlases. The patch dependencies will be further recursively updated based on the latest labeling results to correct the possible labeling errors, which falls to the Expectation Maximization (EM) framework. To demonstrate the labeling performance, we have comprehensively evaluated our patch-based labeling method on the whole brain parcellation and hippocampus segmentation. Promising labeling results have been achieved with comparison to the conventional patch-based labeling method, indicating the potential application of the proposed method in the future clinical studies.",
"title": ""
},
{
"docid": "neg:1840121_11",
"text": "With the growing volume of online information, recommender systems have been an effective strategy to overcome information overload. The utility of recommender systems cannot be overstated, given their widespread adoption in many web applications, along with their potential impact to ameliorate many problems related to over-choice. In recent years, deep learning has garnered considerable interest in many research fields such as computer vision and natural language processing, owing not only to stellar performance but also to the attractive property of learning feature representations from scratch. The influence of deep learning is also pervasive, recently demonstrating its effectiveness when applied to information retrieval and recommender systems research. The field of deep learning in recommender system is flourishing. This article aims to provide a comprehensive review of recent research efforts on deep learning-based recommender systems. More concretely, we provide and devise a taxonomy of deep learning-based recommendation models, along with a comprehensive summary of the state of the art. Finally, we expand on current trends and provide new perspectives pertaining to this new and exciting development of the field.",
"title": ""
},
{
"docid": "neg:1840121_12",
"text": "The critical period hypothesis for language acquisition (CP) proposes that the outcome of language acquisition is not uniform over the lifespan but rather is best during early childhood. The CP hypothesis was originally proposed for spoken language but recent research has shown that it applies equally to sign language. This paper summarizes a series of experiments designed to investigate whether and how the CP affects the outcome of sign language acquisition. The results show that the CP has robust effects on the development of sign language comprehension. Effects are found at all levels of linguistic structure (phonology, morphology and syntax, the lexicon and semantics) and are greater for first as compared to second language acquisition. In addition, CP effects have been found on all measures of language comprehension examined to date, namely, working memory, narrative comprehension, sentence memory and interpretation, and on-line grammatical processing. The nature of these effects with respect to a model of language comprehension is discussed.",
"title": ""
},
{
"docid": "neg:1840121_13",
"text": "BACKGROUND\nPsychosocial treatments are the mainstay of management of autism in the UK but there is a notable lack of a systematic evidence base for their effectiveness. Randomised controlled trial (RCT) studies in this area have been rare but are essential because of the developmental heterogeneity of the disorder. We aimed to test a new theoretically based social communication intervention targeting parental communication in a randomised design against routine care alone.\n\n\nMETHODS\nThe intervention was given in addition to existing care and involved regular monthly therapist contact for 6 months with a further 6 months of 2-monthly consolidation sessions. It aimed to educate parents and train them in adapted communication tailored to their child's individual competencies. Twenty-eight children with autism were randomised between this treatment and routine care alone, stratified for age and baseline severity. Outcome was measured at 12 months from commencement of intervention, using standardised instruments.\n\n\nRESULTS\nAll cases studied met full Autism Diagnostic Interview (ADI) criteria for classical autism. Treatment and controls had similar routine care during the study period and there were no study dropouts after treatment had started. The active treatment group showed significant improvement compared with controls on the primary outcome measure--Autism Diagnostic Observation Schedule (ADOS) total score, particularly in reciprocal social interaction--and on secondary measures of expressive language, communicative initiation and parent-child interaction. Suggestive but non-significant results were found in Vineland Adaptive Behaviour Scales (Communication Sub-domain) and ADOS stereotyped and restricted behaviour domain.\n\n\nCONCLUSIONS\nA Randomised Treatment Trial design of this kind in classical autism is feasible and acceptable to patients. This pilot study suggests significant additional treatment benefits following a targeted (but relatively non-intensive) dyadic social communication treatment, when compared with routine care. The study needs replication on larger and independent samples. It should encourage further RCT designs in this area.",
"title": ""
},
{
"docid": "neg:1840121_14",
"text": "Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \"field-level\" and \"farm-level\" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However, shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.",
"title": ""
},
{
"docid": "neg:1840121_15",
"text": "In this paper we propose a Deep Autoencoder Mixture Clustering (DAMIC) algorithm based on a mixture of deep autoencoders where each cluster is represented by an autoencoder. A clustering network transforms the data into another space and then selects one of the clusters. Next, the autoencoder associated with this cluster is used to reconstruct the data-point. The clustering algorithm jointly learns the nonlinear data representation and the set of autoencoders. The optimal clustering is found by minimizing the reconstruction loss of the mixture of autoencoder network. Unlike other deep clustering algorithms, no regularization term is needed to avoid data collapsing to a single point. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840121_16",
"text": "This paper presents the design of a modulated metasurface (MTS) antenna capable to provide both right-hand (RH) and left-hand (LH) circularly polarized (CP) boresight radiation at Ku-band (13.5 GHz). This antenna is based on the interaction of two cylindrical-wavefront surface wave (SW) modes of transverse electric (TE) and transverse magnetic (TM) types with a rotationally symmetric, anisotropic-modulated MTS placed on top of a grounded slab. A properly designed centered circular waveguide feed excites the two orthogonal (decoupled) SW modes and guarantees the balance of the power associated with each of them. By a proper selection of the anisotropy and modulation of the MTS pattern, the phase velocities of the two modes are synchronized, and leakage is generated in broadside direction with two orthogonal linear polarizations. When the circular waveguide is excited with two mutually orthogonal TE11 modes in phase-quadrature, an LHCP or RHCP antenna is obtained. This paper explains the feeding system and the MTS requirements that guarantee the balanced conditions of the TM/TE SWs and consequent generation of dual CP boresight radiation.",
"title": ""
},
{
"docid": "neg:1840121_17",
"text": "Agri-Food is the largest manufacturing sector in the UK. It supports a food chain that generates over £108bn p.a., with 3.9m employees in a truly international industry and exports £20bn of UK manufactured goods. However, the global food chain is under pressure from population growth, climate change, political pressures affecting migration, population drift from rural to urban regions and the demographics of an aging global population. These challenges are recognised in the UK Industrial Strategy white paper and backed by significant investment via a Wave 2 Industrial Challenge Fund Investment (\"Transforming Food Production: from Farm to Fork\"). Robotics and Autonomous Systems (RAS) and associated digital technologies are now seen as enablers of this critical food chain transformation. To meet these challenges, this white paper reviews the state of the art in the application of RAS in Agri-Food production and explores research and innovation needs to ensure these technologies reach their full potential and deliver the necessary impacts in the Agri-Food sector.",
"title": ""
},
{
"docid": "neg:1840121_18",
"text": "Bacteria and fungi are ubiquitous in the atmosphere. The diversity and abundance of airborne microbes may be strongly influenced by atmospheric conditions or even influence atmospheric conditions themselves by acting as ice nucleators. However, few comprehensive studies have described the diversity and dynamics of airborne bacteria and fungi based on culture-independent techniques. We document atmospheric microbial abundance, community composition, and ice nucleation at a high-elevation site in northwestern Colorado. We used a standard small-subunit rRNA gene Sanger sequencing approach for total microbial community analysis and a bacteria-specific 16S rRNA bar-coded pyrosequencing approach (4,864 sequences total). During the 2-week collection period, total microbial abundances were relatively constant, ranging from 9.6 x 10(5) to 6.6 x 10(6) cells m(-3) of air, and the diversity and composition of the airborne microbial communities were also relatively static. Bacteria and fungi were nearly equivalent, and members of the proteobacterial groups Burkholderiales and Moraxellaceae (particularly the genus Psychrobacter) were dominant. These taxa were not always the most abundant in freshly fallen snow samples collected at this site. Although there was minimal variability in microbial abundances and composition within the atmosphere, the number of biological ice nuclei increased significantly during periods of high relative humidity. However, these changes in ice nuclei numbers were not associated with changes in the relative abundances of the most commonly studied ice-nucleating bacteria.",
"title": ""
},
{
"docid": "neg:1840121_19",
"text": "This short paper outlines how polynomial chaos theory (PCT) can be utilized for manipulator dynamic analysis and controller design in a 4-DOF selective compliance assembly robot-arm-type manipulator with variation in both the link masses and payload. It includes a simple linear control algorithm into the formulation to show the capability of the PCT framework.",
"title": ""
}
] |
1840122 | A vision system for traffic sign detection and recognition | [
{
"docid": "pos:1840122_0",
"text": "Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state-of-the-art in this field is. This can be accounted to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap by the “German Traffic Sign Detection Benchmark” presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark data set for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web-interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Houghlike voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.",
"title": ""
},
{
"docid": "pos:1840122_1",
"text": "In this paper, we propose a high-performance traffic sign recognition (TSR) framework to rapidly detect and recognize multiclass traffic signs in high-resolution images. This framework includes three parts: a novel region-of-interest (ROI) extraction method called the high-contrast region extraction (HCRE), the split-flow cascade tree detector (SFC-tree detector), and a rapid occlusion-robust traffic sign classification method based on the extended sparse representation classification (ESRC). Unlike the color-thresholding or extreme region extraction methods used by previous ROI methods, the ROI extraction method of the HCRE is designed to extract ROI with high local contrast, which can keep a good balance of the detection rate and the extraction rate. The SFC-tree detector can detect a large number of different types of traffic signs in high-resolution images quickly. The traffic sign classification method based on the ESRC is designed to classify traffic signs with partial occlusion. Instead of solving the sparse representation problem using an overcomplete dictionary, the classification method based on the ESRC utilizes a content dictionary and an occlusion dictionary to sparsely represent traffic signs, which can largely reduce the dictionary size in the occlusion-robust dictionaries and achieve high accuracy. The experiments demonstrate the advantage of the proposed approach, and our TSR framework can rapidly detect and recognize multiclass traffic signs with high accuracy.",
"title": ""
}
] | [
{
"docid": "neg:1840122_0",
"text": "BACKGROUND\nVitamin D is crucial for maintenance of musculoskeletal health, and might also have a role in extraskeletal tissues. Determinants of circulating 25-hydroxyvitamin D concentrations include sun exposure and diet, but high heritability suggests that genetic factors could also play a part. We aimed to identify common genetic variants affecting vitamin D concentrations and risk of insufficiency.\n\n\nMETHODS\nWe undertook a genome-wide association study of 25-hydroxyvitamin D concentrations in 33 996 individuals of European descent from 15 cohorts. Five epidemiological cohorts were designated as discovery cohorts (n=16 125), five as in-silico replication cohorts (n=9367), and five as de-novo replication cohorts (n=8504). 25-hydroxyvitamin D concentrations were measured by radioimmunoassay, chemiluminescent assay, ELISA, or mass spectrometry. Vitamin D insufficiency was defined as concentrations lower than 75 nmol/L or 50 nmol/L. We combined results of genome-wide analyses across cohorts using Z-score-weighted meta-analysis. Genotype scores were constructed for confirmed variants.\n\n\nFINDINGS\nVariants at three loci reached genome-wide significance in discovery cohorts for association with 25-hydroxyvitamin D concentrations, and were confirmed in replication cohorts: 4p12 (overall p=1.9x10(-109) for rs2282679, in GC); 11q12 (p=2.1x10(-27) for rs12785878, near DHCR7); and 11p15 (p=3.3x10(-20) for rs10741657, near CYP2R1). Variants at an additional locus (20q13, CYP24A1) were genome-wide significant in the pooled sample (p=6.0x10(-10) for rs6013897). Participants with a genotype score (combining the three confirmed variants) in the highest quartile were at increased risk of having 25-hydroxyvitamin D concentrations lower than 75 nmol/L (OR 2.47, 95% CI 2.20-2.78, p=2.3x10(-48)) or lower than 50 nmol/L (1.92, 1.70-2.16, p=1.0x10(-26)) compared with those in the lowest quartile.\n\n\nINTERPRETATION\nVariants near genes involved in cholesterol synthesis, hydroxylation, and vitamin D transport affect vitamin D status. Genetic variation at these loci identifies individuals who have substantially raised risk of vitamin D insufficiency.\n\n\nFUNDING\nFull funding sources listed at end of paper (see Acknowledgments).",
"title": ""
},
{
"docid": "neg:1840122_1",
"text": "In four experiments, this research sheds light on aesthetic experiences by rigorously investigating behavioral, neural, and psychological properties of package design. We find that aesthetic packages significantly increase the reaction time of consumers' choice responses; that they are chosen over products with well-known brands in standardized packages, despite higher prices; and that they result in increased activation in the nucleus accumbens and the ventromedial prefrontal cortex, according to functional magnetic resonance imaging (fMRI). The results suggest that reward value plays an important role in aesthetic product experiences. Further, a closer look at psychometric and neuroimaging data finds that a paper-and-pencil measure of affective product involvement correlates with aesthetic product experiences in the brain. Implications for future aesthetics research, package designers, and product managers are discussed. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840122_2",
"text": "This paper describes the interactive narrative experiences in Babyz, an interactive entertainment product for the PC currently in development at PF Magic / Mindscape in San Francisco, to be released in October 1999. Babyz are believable agents designed and implemented in the tradition of Dogz and Catz, Your Virtual Petz. As virtual human characters, Babyz are more intelligent, expressive and communicative than their Petz predecessors, allowing for both broader and deeper narrative possibilities. Babyz are designed with behaviors to support entertaining short-term narrative experiences, as well as long-term emotional relationships and narratives.",
"title": ""
},
{
"docid": "neg:1840122_3",
"text": "Spend your few moment to read a book even only few pages. Reading book is not obligation and force for everybody. When you don't want to read, you can get punishment from the publisher. Read a book becomes a choice of your different characteristics. Many people with reading habit will always be enjoyable to read, or on the contrary. For some reasons, this inductive logic programming techniques and applications tends to be the representative book in this website.",
"title": ""
},
{
"docid": "neg:1840122_4",
"text": "A model of positive psychological functioning that emerges from diverse domains of theory and philosophy is presented. Six key dimensions of wellness are defined, and empirical research summarizing their empirical translation and sociodemographic correlates is presented. Variations in well-being are explored via studies of discrete life events and enduring human experiences. Life histories of the psychologically vulnerable and resilient, defined via the cross-classification of depression and well-being, are summarized. Implications of the focus on positive functioning for research on psychotherapy, quality of life, and mind/body linkages are reviewed.",
"title": ""
},
{
"docid": "neg:1840122_5",
"text": "Cyanobacteria are found globally due to their adaptation to various environments. The occurrence of cyanobacterial blooms is not a new phenomenon. The bloom-forming and toxin-producing species have been a persistent nuisance all over the world over the last decades. Evidence suggests that this trend might be attributed to a complex interplay of direct and indirect anthropogenic influences. To control cyanobacterial blooms, various strategies, including physical, chemical, and biological methods have been proposed. Nevertheless, the use of those strategies is usually not effective. The isolation of natural compounds from many aquatic and terrestrial plants and seaweeds has become an alternative approach for controlling harmful algae in aquatic systems. Seaweeds have received attention from scientists because of their bioactive compounds with antibacterial, antifungal, anti-microalgae, and antioxidant properties. The undesirable effects of cyanobacteria proliferations and potential control methods are here reviewed, focusing on the use of potent bioactive compounds, isolated from seaweeds, against microalgae and cyanobacteria growth.",
"title": ""
},
{
"docid": "neg:1840122_6",
"text": "In recent years, extensive research has been conducted in the area of Service Level Agreement (SLA) for utility computing systems. An SLA is a formal contract used to guarantee that consumers’ service quality expectation can be achieved. In utility computing systems, the level of customer satisfaction is crucial, making SLAs significantly important in these environments. Fundamental issue is the management of SLAs, including SLA autonomy management or trade off among multiple Quality of Service (QoS) parameters. Many SLA languages and frameworks have been developed as solutions; however, there is no overall classification for these extensive works. Therefore, the aim of this chapter is to present a comprehensive survey of how SLAs are created, managed and used in utility computing environment. We discuss existing use cases from Grid and Cloud computing systems to identify the level of SLA realization in state-of-art systems and emerging challenges for future research.",
"title": ""
},
{
"docid": "neg:1840122_7",
"text": "Memory units have been widely used to enrich the capabilities of deep networks on capturing long-term dependencies in reasoning and prediction tasks, but little investigation exists on deep generative models (DGMs) which are good at inferring high-level invariant representations from unlabeled data. This paper presents a deep generative model with a possibly large external memory and an attention mechanism to capture the local detail information that is often lost in the bottom-up abstraction process in representation learning. By adopting a smooth attention model, the whole network is trained end-to-end by optimizing a variational bound of data likelihood via auto-encoding variational Bayesian methods, where an asymmetric recognition network is learnt jointly to infer high-level invariant representations. The asymmetric architecture can reduce the competition between bottom-up invariant feature extraction and top-down generation of instance details. Our experiments on several datasets demonstrate that memory can significantly boost the performance of DGMs on various tasks, including density estimation, image generation, and missing value imputation, and DGMs with memory can achieve state-of-the-art quantitative results.",
"title": ""
},
{
"docid": "neg:1840122_8",
"text": "Identifying building footprints is a critical and challenging problem in many remote sensing applications. Solutions to this problem have been investigated using a variety of sensing modalities as input. In this work, we consider the detection of building footprints from 3D Digital Surface Models (DSMs) created from commercial satellite imagery along with RGB orthorectified imagery. Recent public challenges (SpaceNet 1 and 2, DSTL Satellite Imagery Feature Detection Challenge, and the ISPRS Test Project on Urban Classification) approach this problem using other sensing modalities or higher resolution data. As a result of these challenges and other work, most publically available automated methods for building footprint detection using 2D and 3D data sources as input are meant for high-resolution 3D lidar and 2D airborne imagery, or make use of multispectral imagery as well to aid detection. Performance is typically degraded as the fidelity and post spacing of the 3D lidar data or the 2D imagery is reduced. Furthermore, most software packages do not work well enough with this type of data to enable a fully automated solution. We describe a public benchmark dataset consisting of 50 cm DSMs created from commercial satellite imagery, as well as coincident 50 cm RGB orthorectified imagery products. The dataset includes ground truth building outlines and we propose representative quantitative metrics for evaluating performance. In addition, we provide lessons learned and hope to promote additional research in this field by releasing this public benchmark dataset to the community.",
"title": ""
},
{
"docid": "neg:1840122_9",
"text": "Low power wide area (LPWA) networks are making spectacular progress from design, standardization, to commercialization. At this time of fast-paced adoption, it is of utmost importance to analyze how well these technologies will scale as the number of devices connected to the Internet of Things inevitably grows. In this letter, we provide a stochastic geometry framework for modeling the performance of a single gateway LoRa network, a leading LPWA technology. Our analysis formulates the unique peculiarities of LoRa, including its chirp spread-spectrum modulation technique, regulatory limitations on radio duty cycle, and use of ALOHA protocol on top, all of which are not as common in today’s commercial cellular networks. We show that the coverage probability drops exponentially as the number of end-devices grows due to interfering signals using the same spreading sequence. We conclude that this fundamental limiting factor is perhaps more significant toward LoRa scalability than for instance spectrum restrictions. Our derivations for co-spreading factor interference found in LoRa networks enables rigorous scalability analysis of such networks.",
"title": ""
},
{
"docid": "neg:1840122_10",
"text": "Over the past decade, crowdsourcing has emerged as a cheap and efficient method of obtaining solutions to simple tasks that are difficult for computers to solve but possible for humans. The popularity and promise of crowdsourcing markets has led to both empirical and theoretical research on the design of algorithms to optimize various aspects of these markets, such as the pricing and assignment of tasks. Much of the existing theoretical work on crowdsourcing markets has focused on problems that fall into the broad category of online decision making; task requesters or the crowdsourcing platform itself make repeated decisions about prices to set, workers to filter out, problems to assign to specific workers, or other things. Often these decisions are complex, requiring algorithms that learn about the distribution of available tasks or workers over time and take into account the strategic (or sometimes irrational) behavior of workers.\n As human computation grows into its own field, the time is ripe to address these challenges in a principled way. However, it appears very difficult to capture all pertinent aspects of crowdsourcing markets in a single coherent model. In this paper, we reflect on the modeling issues that inhibit theoretical research on online decision making for crowdsourcing, and identify some steps forward. This paper grew out of the authors' own frustration with these issues, and we hope it will encourage the community to attempt to understand, debate, and ultimately address them.",
"title": ""
},
{
"docid": "neg:1840122_11",
"text": "Business Intelligence (BI) deals with integrated approaches to management support. Currently, there are constraints to BI adoption in the new era of analytic data management for business intelligence: the integrated infrastructures that support BI have become complex, costly, and inflexible; consolidating and cleansing enterprise data requires substantial effort; and there is a performance impact on existing, often inadequate, IT infrastructure. So, in this paper cloud computing will be used as a possible remedy for these issues. We will present a new environment for business intelligence with the ability to shorten BI implementation windows, reduce cost for BI programs compared with traditional on-premise BI software, add environments for testing, proof-of-concepts and upgrades, and offer users the potential for faster deployments and increased flexibility. Also, cloud computing enables organizations to analyze terabytes of data faster and more economically than ever before. Business intelligence (BI) in the cloud can be like a big puzzle. Users can jump in and put together small pieces of the puzzle, but until the whole thing is complete the user will lack an overall view of the big picture. In this paper reading each section will fill in a piece of the puzzle.",
"title": ""
},
{
"docid": "neg:1840122_12",
"text": "The Short Messaging Service (SMS) is one of the most successful cellular services, generating millions of dollars in revenue for mobile operators yearly. Current estimations indicate that billions of SMSs are sent every day. Nevertheless, text messaging is becoming a source of customer dissatisfaction due to the rapid surge of messaging abuse activities. Although spam is a well tackled problem in the email world, SMS spam experiences a yearly growth larger than 500%. In this paper we expand our previous analysis on SMS spam traffic from a tier-1 cellular operator presented in [1], aiming to highlight the main characteristics of such messaging fraud activity. Communication patterns of spammers are compared to those of legitimate cell-phone users and Machine to Machine (M2M) connected appliances. The results indicate that M2M systems exhibit communication profiles similar to spammers, which could mislead spam filters. We find the main geographical sources of messaging abuse in the US. We also find evidence of spammer mobility, voice and data traffic resembling the behavior of legitimate customers. Finally, we include new findings on the invariance of the main characteristics of spam messages and spammers over time. Also, we present results that indicate a clear device reuse strategy in SMS spam activities.",
"title": ""
},
{
"docid": "neg:1840122_13",
"text": "The paper presents a novel approach to unsupervised text summarization. The novelty lies in exploiting the diversity of concepts in text for summarization, which has not received much attention in the summarization literature. A diversity-based approach here is a principled generalization of the Maximal Marginal Relevance criterion by Carbonell and Goldstein \\cite{carbonell-goldstein98}.\nWe propose, in addition, an information-centric approach to evaluation, where the quality of summaries is judged not in terms of how well they match human-created summaries but in terms of how well they represent their source documents in IR tasks such as document retrieval and text categorization.\nTo find the effectiveness of our approach under the proposed evaluation scheme, we set out to examine how a system with the diversity functionality performs against one without, using the BMIR-J2 corpus, a test data set developed by a Japanese research consortium. The results demonstrate a clear superiority of a diversity-based approach over a non-diversity-based approach.",
"title": ""
},
{
"docid": "neg:1840122_14",
"text": "For large state-space Markovian Decision Problems, Monte-Carlo planning is one of the few viable approaches to find near-optimal solutions. In this paper we introduce a new algorithm, UCT, that applies bandit ideas to guide Monte-Carlo planning. In finite-horizon or discounted MDPs the algorithm is shown to be consistent and finite sample bounds are derived on the estimation error due to sampling. Experimental results show that in several domains, UCT is significantly more efficient than its alternatives.",
"title": ""
},
{
"docid": "neg:1840122_15",
"text": "SigTur/E-Destination is a Web-based system that provides personalized recommendations of touristic activities in the region of Tarragona. The activities are properly classified and labeled according to a specific ontology, which guides the reasoning process. The recommender takes into account many different kinds of data: demographic information, travel motivations, the actions of the user on the system, the ratings provided by the user, the opinions of users with similar demographic characteristics or similar tastes, etc. The system has been fully designed and implemented in the Science and Technology Park of Tourism and Leisure. The paper presents a numerical evaluation of the correlation between the recommendations and the user’s motivations, and a qualitative evaluation performed by end users. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840122_16",
"text": "Acknowledgements Research in areas where there are many possible paths to follow requires a keen eye for crucial issues. The study of learning systems is such an area. Through the years of working with Andy Barto and Rich Sutton, I have observed many instances of \"fluff cutting\" and the exposure of basic issues. I thank both Andy and Rich for the insights that have rubbed off on me. I also thank Andy for opening up an infinite world of perspectives on learning, ranging from engineering principles to neural processing theories. I thank Rich for showing me the most important step in doing \"science\": simplify your questions by isolating the issues. Several people contributed to the readability of this dissertation. Andy spent much time carefully reading several drafts. Through his efforts the clarity is much improved. I thank Paul Utgoff, Michael Arbib, and Bill Kilmer for reading drafts of this dissertation and providing valuable criticisms. Paul provided a non-connectionist perspective that widened my view considerably. He never hesitated to work out differences in terms and methodologies that have been developed through research with connectionist vs. symbolic representations. I thank for commenting on an early draft and for many interesting discussions. and the AFOSR for starting and maintaining the research project that supported the work reported in this dissertation. I thank Susan Parker for the skill with which she administered the project. And I thank the COINS Department at UMass and the RCF Staff for the maintenance of the research computing environment. Much of the computer graphics software used to generate figures of this dissertation is based on graphics tools provided by Rich Sutton and Andy Cromarty. Most importantly, I thank Stacey and Joseph for always being there to lift my spirits while I pursued distant milestones and to share my excitement upon reaching them. Their faith and confidence helped me maintain a proper perspective.\n\nThe difficulties of learning in multilayered networks of computational units has limited the use of connectionist systems in complex domains. This dissertation elucidates the issues of learning in a network's hidden units, and reviews methods for addressing these issues that have been developed through the years. Issues of learning in hidden units are shown to be analogous to learning issues for multilayer systems employing symbolic representations. Comparisons of a number of algorithms for learning in hidden units are made by applying them in …",
"title": ""
},
{
"docid": "neg:1840122_17",
"text": "Gamification of education is a developing approach for increasing learners’ motivation and engagement by incorporating game design elements in educational environments. With the growing popularity of gamification and yet mixed success of its application in educational contexts, the current review is aiming to shed a more realistic light on the research in this field by focusing on empirical evidence rather than on potentialities, beliefs or preferences. Accordingly, it critically examines the advancement in gamifying education. The discussion is structured around the used gamification mechanisms, the gamified subjects, the type of gamified learning activities, and the study goals, with an emphasis on the reliability and validity of the reported outcomes. To improve our understanding and offer a more realistic picture of the progress of gamification in education, consistent with the presented evidence, we examine both the outcomes reported in the papers and how they have been obtained. While the gamification in education is still a growing phenomenon, the review reveals that (i) insufficient evidence exists to support the long-term benefits of gamification in educational contexts; (ii) the practice of gamifying learning has outpaced researchers’ understanding of its mechanisms and methods; (iii) the knowledge of how to gamify an activity in accordance with the specifics of the educational context is still limited. The review highlights the need for systematically designed studies and rigorously tested approaches confirming the educational benefits of gamification, if gamified learning is to become a recognized instructional approach.",
"title": ""
},
{
"docid": "neg:1840122_18",
"text": "In any teaching and learning setting, there are some variables that play a highly significant role in both teachers’ and learners’ performance. Two of these influential psychological domains in educational context include self-efficacy and burnout. This study is conducted to investigate the relationship between the self-efficacy of Iranian teachers of English and their reports of burnout. The data was collected through application of two questionnaires. The Maslach Burnout Inventory (MBI; Maslach& Jackson 1981, 1986) and Teacher Efficacy Scales (Woolfolk& Hoy, 1990) were administered to ten university teachers. After obtaining the raw data, the SPSS software (version 16) was used to change the data into numerical interpretable forms. In order to determine the relationship between self-efficacy and teachers’ burnout, correlational analysis was employed. The results showed that participants’ self-efficacy has a reverse relationship with their burnout.",
"title": ""
},
{
"docid": "neg:1840122_19",
"text": "BACKGROUND\nAlthough many patients with venous thromboembolism require extended treatment, it is uncertain whether it is better to use full- or lower-intensity anticoagulation therapy or aspirin.\n\n\nMETHODS\nIn this randomized, double-blind, phase 3 study, we assigned 3396 patients with venous thromboembolism to receive either once-daily rivaroxaban (at doses of 20 mg or 10 mg) or 100 mg of aspirin. All the study patients had completed 6 to 12 months of anticoagulation therapy and were in equipoise regarding the need for continued anticoagulation. Study drugs were administered for up to 12 months. The primary efficacy outcome was symptomatic recurrent fatal or nonfatal venous thromboembolism, and the principal safety outcome was major bleeding.\n\n\nRESULTS\nA total of 3365 patients were included in the intention-to-treat analyses (median treatment duration, 351 days). The primary efficacy outcome occurred in 17 of 1107 patients (1.5%) receiving 20 mg of rivaroxaban and in 13 of 1127 patients (1.2%) receiving 10 mg of rivaroxaban, as compared with 50 of 1131 patients (4.4%) receiving aspirin (hazard ratio for 20 mg of rivaroxaban vs. aspirin, 0.34; 95% confidence interval [CI], 0.20 to 0.59; hazard ratio for 10 mg of rivaroxaban vs. aspirin, 0.26; 95% CI, 0.14 to 0.47; P<0.001 for both comparisons). Rates of major bleeding were 0.5% in the group receiving 20 mg of rivaroxaban, 0.4% in the group receiving 10 mg of rivaroxaban, and 0.3% in the aspirin group; the rates of clinically relevant nonmajor bleeding were 2.7%, 2.0%, and 1.8%, respectively. The incidence of adverse events was similar in all three groups.\n\n\nCONCLUSIONS\nAmong patients with venous thromboembolism in equipoise for continued anticoagulation, the risk of a recurrent event was significantly lower with rivaroxaban at either a treatment dose (20 mg) or a prophylactic dose (10 mg) than with aspirin, without a significant increase in bleeding rates. 
(Funded by Bayer Pharmaceuticals; EINSTEIN CHOICE ClinicalTrials.gov number, NCT02064439 .).",
"title": ""
}
] |
1840123 | spherical vectors and geometric interpretation of unit quaternions | [
{
"docid": "pos:1840123_0",
"text": "Some of the confusions concerning quaternions as they are employed in spacecraft attitude work are discussed. The order of quaternion multiplication is discussed in terms of its historical development and its consequences for the quaternion imaginaries. The di erent formulations for the quaternions are also contrasted. It is shown that the three Hamilton imaginaries cannot be interpreted as the basis of the vector space of physical vectors but only as constant numerical column vectors, the autorepresentation of a physical basis.",
"title": ""
}
] | [
{
"docid": "neg:1840123_0",
"text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.",
"title": ""
},
{
"docid": "neg:1840123_1",
"text": "The vital sign monitoring through Impulse Radio Ultra-Wide Band (IR-UWB) radar provides continuous assessment of a patient's respiration and heart rates in a non-invasive manner. In this paper, IR UWB radar is used for monitoring respiration and the human heart rate. The breathing and heart rate frequencies are extracted from the signal reflected from the human body. A Kalman filter is applied to reduce the measurement noise from the vital signal. An algorithm is presented to separate the heart rate signal from the breathing harmonics. An auto-correlation based technique is applied for detecting random body movements (RBM) during the measurement process. Experiments were performed in different scenarios in order to show the validity of the algorithm. The vital signs were estimated for the signal reflected from the chest, as well as from the back side of the body in different experiments. The results from both scenarios are compared for respiration and heartbeat estimation accuracy.",
"title": ""
},
{
"docid": "neg:1840123_2",
"text": "The restricted Boltzmann machine (RBM) has received an increasing amount of interest in recent years. It determines good mapping weights that capture useful latent features in an unsupervised manner. The RBM and its generalizations have been successfully applied to a variety of image classification and speech recognition tasks. However, most of the existing RBM-based models disregard the preservation of the data manifold structure. In many real applications, the data generally reside on a low-dimensional manifold embedded in high-dimensional ambient space. In this brief, we propose a novel graph regularized RBM to capture features and learning representations, explicitly considering the local manifold structure of the data. By imposing manifold-based locality that preserves constraints on the hidden layer of the RBM, the model ultimately learns sparse and discriminative representations. The representations can reflect data distributions while simultaneously preserving the local manifold structure of data. We test our model using several benchmark image data sets for unsupervised clustering and supervised classification problem. The results demonstrate that the performance of our method exceeds the state-of-the-art alternatives.",
"title": ""
},
{
"docid": "neg:1840123_3",
"text": "Weakly supervised image segmentation is an important yet challenging task in image processing and pattern recognition fields. It is defined as: in the training stage, semantic labels are only at the image-level, without regard to their specific object/scene location within the image. Given a test image, the goal is to predict the semantics of every pixel/superpixel. In this paper, we propose a new weakly supervised image segmentation model, focusing on learning the semantic associations between superpixel sets (graphlets in this paper). In particular, we first extract graphlets from each image, where a graphlet is a small-sized graph measures the potential of multiple spatially neighboring superpixels (i.e., the probability of these superpixels sharing a common semantic label, such as the sky or the sea). To compare different-sized graphlets and to incorporate image-level labels, a manifold embedding algorithm is designed to transform all graphlets into equal-length feature vectors. Finally, we present a hierarchical Bayesian network to capture the semantic associations between postembedding graphlets, based on which the semantics of each superpixel is inferred accordingly. Experimental results demonstrate that: 1) our approach performs competitively compared with the state-of-the-art approaches on three public data sets and 2) considerable performance enhancement is achieved when using our approach on segmentation-based photo cropping and image categorization.",
"title": ""
},
{
"docid": "neg:1840123_4",
"text": "It is unclear whether combined leg and arm high-intensity interval training (HIIT) improves fitness and morphological characteristics equal to those of leg-based HIIT programs. The aim of this study was to compare the effects of HIIT using leg-cycling (LC) and arm-cranking (AC) ergometers with an HIIT program using only LC. Effects on aerobic capacity and skeletal muscle were analyzed. Twelve healthy male subjects were assigned into two groups. One performed LC-HIIT (n=7) and the other LC- and AC-HIIT (n=5) twice weekly for 16 weeks. The training programs consisted of eight to 12 sets of >90% VO2 (the oxygen uptake that can be utilized in one minute) peak for 60 seconds with a 60-second active rest period. VO2 peak, watt peak, and heart rate were measured during an LC incremental exercise test. The cross-sectional area (CSA) of trunk and thigh muscles as well as bone-free lean body mass were measured using magnetic resonance imaging and dual-energy X-ray absorptiometry. The watt peak increased from baseline in both the LC (23%±38%; P<0.05) and the LC-AC groups (11%±9%; P<0.05). The CSA of the quadriceps femoris muscles also increased from baseline in both the LC (11%±4%; P<0.05) and the LC-AC groups (5%±5%; P<0.05). In contrast, increases were observed in the CSA of musculus psoas major (9%±11%) and musculus anterolateral abdominal (7%±4%) only in the LC-AC group. These results suggest that a combined LC- and AC-HIIT program improves aerobic capacity and muscle hypertrophy in both leg and trunk muscles.",
"title": ""
},
{
"docid": "neg:1840123_5",
"text": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization.",
"title": ""
},
{
"docid": "neg:1840123_6",
"text": "Life-span developmental psychology involves the study of constancy and change in behavior throughout the life course. One aspect of life-span research has been the advancement of a more general, metatheoretical view on the nature of development. The family of theoretical perspectives associated with this metatheoretical view of life-span developmental psychology includes the recognition of multidirectionality in ontogenetic change, consideration of both age-connected and disconnected developmental factors, a focus on the dynamic and continuous interplay between growth (gain) and decline (loss), emphasis on historical embeddedness and other structural contextual factors, and the study of the range of plasticity in development. Application of the family of perspectives associated with life-span developmental psychology is illustrated for the domain of intellectual development. Two recently emerging perspectives of the family of beliefs are given particular attention. The first proposition is methodological and suggests that plasticity can best be studied with a research strategy called testing-the-limits. The second proposition is theoretical and proffers that any developmental change includes the joint occurrence of gain (growth) and loss (decline) in adaptive capacity. To assess the pattern of positive (gains) and negative (losses) consequences resulting from development, it is necessary to know the criterion demands posed by the individual and the environment during the lifelong process of adaptation.",
"title": ""
},
{
"docid": "neg:1840123_7",
"text": "Modern software systems are typically large and complex, making comprehension of these systems extremely difficult. Experienced programmers comprehend code by seamlessly processing synonyms and other word relations. Thus, we believe that automated comprehension and software tools can be significantly improved by leveraging word relations in software. In this paper, we perform a comparative study of six state of the art, English-based semantic similarity techniques and evaluate their effectiveness on words from the comments and identifiers in software. Our results suggest that applying English-based semantic similarity techniques to software without any customization could be detrimental to the performance of the client software tools. We propose strategies to customize the existing semantic similarity techniques to software, and describe how various program comprehension tools can benefit from word relation information.",
"title": ""
},
{
"docid": "neg:1840123_8",
"text": "Special Issue Anthony Vance Brigham Young University anthony@vance.name Bonnie Brinton Anderson Brigham Young University bonnie_anderson@byu.edu C. Brock Kirwan Brigham Young University kirwan@byu.edu Users’ perceptions of risks have important implications for information security because individual users’ actions can compromise entire systems. Therefore, there is a critical need to understand how users perceive and respond to information security risks. Previous research on perceptions of information security risk has chiefly relied on self-reported measures. Although these studies are valuable, risk perceptions are often associated with feelings—such as fear or doubt—that are difficult to measure accurately using survey instruments. Additionally, it is unclear how these self-reported measures map to actual security behavior. This paper contributes to this topic by demonstrating that risk-taking behavior is effectively predicted using electroencephalography (EEG) via event-related potentials (ERPs). Using the Iowa Gambling Task, a widely used technique shown to be correlated with real-world risky behaviors, we show that the differences in neural responses to positive and negative feedback strongly predict users’ information security behavior in a separate laboratory-based computing task. In addition, we compare the predictive validity of EEG measures to that of self-reported measures of information security risk perceptions. Our experiments show that self-reported measures are ineffective in predicting security behaviors under a condition in which information security is not salient. However, we show that, when security concerns become salient, self-reported measures do predict security behavior. Interestingly, EEG measures significantly predict behavior in both salient and non-salient conditions, which indicates that EEG measures are a robust predictor of security behavior.",
"title": ""
},
{
"docid": "neg:1840123_9",
"text": "Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for sequential decision-making problems such as Go and video games, but their performance can be poor when the planning depth and sampling trajectories are limited or when the rewards are sparse. We present an adaptation of PGRD (policy-gradient for rewarddesign) for learning a reward-bonus function to improve UCT (a MCTS algorithm). Unlike previous applications of PGRD in which the space of reward-bonus functions was limited to linear functions of hand-coded state-action-features, we use PGRD with a multi-layer convolutional neural network to automatically learn features from raw perception as well as to adapt the non-linear reward-bonus function parameters. We also adopt a variance-reducing gradient method to improve PGRD’s performance. The new method improves UCT’s performance on multiple ATARI games compared to UCT without the reward bonus. Combining PGRD and Deep Learning in this way should make adapting rewards for MCTS algorithms far more widely and practically applicable than before.",
"title": ""
},
{
"docid": "neg:1840123_10",
"text": "The Simulink/Stateflow toolset is an integrated suite enabling model-based design and has become popular in the automotive and aeronautics industries. We have previously developed a translator called Simtolus from Simulink to the synchronous language Lustre and we build upon that work by encompassing Stateflow as well. Stateflow is problematical for synchronous languages because of its unbounded behaviour so we propose analysis techniques to define a subset of Stateflow for which we can define a synchronous semantics. We go further and define a \"safe\" subset of Stateflow which elides features which are potential sources of errors in Stateflow designs. We give an informal presentation of the Stateflow to Lustre translation process and show how our model-checking tool Lesar can be used to verify some of the semantical checks we have proposed. Finally, we present a small case-study.",
"title": ""
},
{
"docid": "neg:1840123_11",
"text": "Emerging interest of trading companies and hedge funds in mining social web has created new avenues for intelligent systems that make use of public opinion in driving investment decisions. It is well accepted that at high frequency trading, investors are tracking memes rising up in microblogging forums to count for the public behavior as an important feature while making short term investment decisions. We investigate the complex relationship between tweet board literature (like bullishness, volume, agreement etc) with the financial market instruments (like volatility, trading volume and stock prices). We have analyzed Twitter sentiments for more than 4 million tweets between June 2010 and July 2011 for DJIA, NASDAQ-100 and 11 other big cap technological stocks. Our results show high correlation (upto 0.88 for returns) between stock prices and twitter sentiments. Further, using Granger’s Causality Analysis, we have validated that the movement of stock prices and indices are greatly affected in the short term by Twitter discussions. Finally, we have implemented Expert Model Mining System (EMMS) to demonstrate that our forecasted returns give a high value of R-square (0.952) with low Maximum Absolute Percentage Error (MaxAPE) of 1.76% for Dow Jones Industrial Average (DJIA). We introduce a novel way to make use of market monitoring elements derived from public mood to retain a portfolio within limited risk state (highly improved hedging bets) during typical market conditions.",
"title": ""
},
{
"docid": "neg:1840123_12",
"text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.",
"title": ""
},
{
"docid": "neg:1840123_13",
"text": "Latent topic model such as Latent Dirichlet Allocation (LDA) has been designed for text processing and has also demonstrated success in the task of audio related processing. The main idea behind LDA assumes that the words of each document arise from a mixture of topics, each of which is a multinomial distribution over the vocabulary. When applying the original LDA to process continuous data, the wordlike unit need be first generated by vector quantization (VQ). This data discretization usually results in information loss. To overcome this shortage, this paper introduces a new topic model named GaussianLDA for audio retrieval. In the proposed model, we consider continuous emission probability, Gaussian instead of multinomial distribution. This new topic model skips the vector quantization and directly models each topic as a Gaussian distribution over audio features. It avoids discretization by this way and integrates the procedure of clustering. The experiments of audio retrieval demonstrate that GaussianLDA achieves better performance than other compared methods. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840123_14",
"text": "This paper reviews the recent progress of quantum-dot semiconductor optical amplifiers developed as ultrawideband polarization-insensitive high-power amplifiers, high-speed signal regenerators, and wideband wavelength converters. A semiconductor optical amplifier having a gain of > 25 dB, noise figure of < 5 dB, and 3-dB saturation output power of > 20 dBm, over the record widest bandwidth of 90 nm among all kinds of optical amplifiers, and also having a penalty-free output power of 23 dBm, the record highest among all the semiconductor optical amplifiers, was realized by using quantum dots. By utilizing isotropically shaped quantum dots, the TM gain, which is absent in the standard Stranski-Krastanow QDs, has been drastically enhanced, and nearly polarization-insensitive SOAs have been realized for the first time. With an ultrafast gain response unique to quantum dots, an optical regenerator having receiver-sensitivity improving capability of 4 dB at a BER of 10-9 and operating speed of > 40 Gb/s has been successfully realized with an SOA chip. This performance achieved together with simplicity of structure suggests a potential for low-cost realization of regenerative transmission systems.",
"title": ""
},
{
"docid": "neg:1840123_15",
"text": "Learning from multiple sources of information is an important problem in machine-learning research. The key challenges are learning representations and formulating inference methods that take into account the complementarity and redundancy of various information sources. In this paper we formulate a variational autoencoder based multi-source learning framework in which each encoder is conditioned on a different information source. This allows us to relate the sources via the shared latent variables by computing divergence measures between individual source’s posterior approximations. We explore a variety of options to learn these encoders and to integrate the beliefs they compute into a consistent posterior approximation. We visualise learned beliefs on a toy dataset and evaluate our methods for learning shared representations and structured output prediction, showing trade-offs of learning separate encoders for each information source. Furthermore, we demonstrate how conflict detection and redundancy can increase robustness of inference in a multi-source setting.",
"title": ""
},
{
"docid": "neg:1840123_16",
"text": "A large amount of food photos are taken in restaurants for diverse reasons. This dish recognition problem is very challenging, due to different cuisines, cooking styles and the intrinsic difficulty of modeling food from its visual appearance. Contextual knowledge is crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and geolocation of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then we reformulate the problem using a probabilistic model connecting dishes, restaurants and geolocations. We apply that model in three different tasks: dish recognition, restaurant recognition and geolocation refinement. Experiments on a dataset including 187 restaurants and 701 dishes show that combining multiple evidences (visual, geolocation, and external knowledge) can boost the performance in all tasks.",
"title": ""
},
{
"docid": "neg:1840123_17",
"text": "Learning through experience is time-consuming, inefficient and often bad for your cortisol levels. To address this problem, a number of recently proposed teacherstudent methods have demonstrated the benefits of private tuition, in which a single model learns from an ensemble of more experienced tutors. Unfortunately, the cost of such supervision restricts good representations to a privileged minority. Unsupervised learning can be used to lower tuition fees, but runs the risk of producing networks that require extracurriculum learning to strengthen their CVs and create their own LinkedIn profiles1. Inspired by the logo on a promotional stress ball at a local recruitment fair, we make the following three contributions. First, we propose a novel almost no supervision training algorithm that is effective, yet highly scalable in the number of student networks being supervised, ensuring that education remains affordable. Second, we demonstrate our approach on a typical use case: learning to bake, developing a method that tastily surpasses the current state of the art. Finally, we provide a rigorous quantitive analysis of our method, proving that we have access to a calculator2. Our work calls into question the long-held dogma that life is the best teacher. Give a student a fish and you feed them for a day, teach a student to gatecrash seminars and you feed them until the day they move to Google.",
"title": ""
},
{
"docid": "neg:1840123_18",
"text": "Organic agriculture (OA) is practiced on 1% of the global agricultural land area and its importance continues to grow. Specifically, OA is perceived by many as having less Advances inAgronomy, ISSN 0065-2113 © 2016 Elsevier Inc. http://dx.doi.org/10.1016/bs.agron.2016.05.003 All rights reserved. 1 ARTICLE IN PRESS",
"title": ""
}
] |
1840124 | Simulation of a photovoltaic panels by using Matlab/Simulink | [
{
"docid": "pos:1840124_0",
"text": "This paper proposes a novel simplified two-diode model of a photovoltaic (PV) module. The main aim of this study is to represent a PV module as an ideal two-diode model. In order to reduce computational time, the proposed model has a photocurrent source, i.e., two ideal diodes, neglecting the series and shunt resistances. Only four unknown parameters from the datasheet are required in order to analyze the proposed model. The simulation results that are obtained by MATLAB/Simulink are validated with experimental data of a commercial PV module, using different PV technologies such as multicrystalline and monocrystalline, supplied by the manufacturer. It is envisaged that this work can be useful for professionals who require a simple and accurate PV simulator for their design.",
"title": ""
}
] | [
{
"docid": "neg:1840124_0",
"text": "Crime tends to clust er geographi cally. This has led to the wide usage of hotspot analysis to identify and visualize crime. Accurately identified crime hotspots can greatly benefit the public by creating accurate threat visualizations, more efficiently allocating police resources, and predicting crime. Yet existing mapping methods usually identify hotspots without considering the underlying correlates of crime. In this study, we introduce a spatial data mining framework to study crime hotspots through their related variables. We use Geospatial Discriminative Patterns (GDPatterns) to capture the significant difference between two classes (hotspots and normal areas) in a geo-spatial dataset. Utilizing GDPatterns, we develop a novel model—Hotspot Optimization Tool (HOT)—to improve the identification of crime hotspots. Finally, based on a similarity measure, we group GDPattern clusters and visualize the distribution and characteristics of crime related variables. We evaluate our approach using a real world dataset collected from a northeast city in the United States. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840124_1",
"text": "Wireless Sensor Networks (WSNs) are crucial in supporting continuous environmental monitoring, where sensor nodes are deployed and must remain operational to collect and transfer data from the environment to a base-station. However, sensor nodes have limited energy in their primary power storage unit, and this energy may be quickly drained if the sensor node remains operational over long periods of time. Therefore, the idea of harvesting ambient energy from the immediate surroundings of the deployed sensors, to recharge the batteries and to directly power the sensor nodes, has recently been proposed. The deployment of energy harvesting in environmental field systems eliminates the dependency of sensor nodes on battery power, drastically reducing the maintenance costs required to replace batteries. In this article, we review the state-of-the-art in energy-harvesting WSNs for environmental monitoring applications, including Animal Tracking, Air Quality Monitoring, Water Quality Monitoring, and Disaster Monitoring to improve the ecosystem and human life. In addition to presenting the technologies for harvesting energy from ambient sources and the protocols that can take advantage of the harvested energy, we present challenges that must be addressed to further advance energy-harvesting-based WSNs, along with some future work directions to address these challenges.",
"title": ""
},
{
"docid": "neg:1840124_2",
"text": "We propose a multigrid extension of convolutional neural networks (CNNs). Rather than manipulating representations living on a single spatial grid, our network layers operate across scale space, on a pyramid of grids. They consume multigrid inputs and produce multigrid outputs, convolutional filters themselves have both within-scale and cross-scale extent. This aspect is distinct from simple multiscale designs, which only process the input at different scales. Viewed in terms of information flow, a multigrid network passes messages across a spatial pyramid. As a consequence, receptive field size grows exponentially with depth, facilitating rapid integration of context. Most critically, multigrid structure enables networks to learn internal attention and dynamic routing mechanisms, and use them to accomplish tasks on which modern CNNs fail. Experiments demonstrate wide-ranging performance advantages of multigrid. On CIFAR and ImageNet classification tasks, flipping from a single grid to multigrid within the standard CNN paradigm improves accuracy, while being compute and parameter efficient. Multigrid is independent of other architectural choices, we show synergy in combination with residual connections. Multigrid yields dramatic improvement on a synthetic semantic segmentation dataset. Most strikingly, relatively shallow multigrid networks can learn to directly perform spatial transformation tasks, where, in contrast, current CNNs fail. Together, our results suggest that continuous evolution of features on a multigrid pyramid is a more powerful alternative to existing CNN designs on a flat grid.",
"title": ""
},
{
"docid": "neg:1840124_3",
"text": "A fully integrated SONET OC-192 transmitter IC using a standard CMOS process consists of an input data register, FIFO, CMU, and 16:1 multiplexer to give a 10Gb/s serial output. A higher FEC rate, 10.7Gb/s, is supported. This chip, using a 0.18/spl mu/m process, exceeds SONET requirements, dissipating 450mW.",
"title": ""
},
{
"docid": "neg:1840124_4",
"text": "Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.",
"title": ""
},
{
"docid": "neg:1840124_5",
"text": "Recently, interests on cleaning robots workable in pipes (termed as in-pipe cleaning robot) are increasing because Garbage Automatic Collection Facilities (i.e, GACF) are widely being installed in Seoul metropolitan area of Korea. So far research on in-pipe robot has been focused on inspection rather than cleaning. In GACF, when garbage is moving, we have to remove the impurities which are stuck to the inner face of the pipe (diameter: 300mm or 400mm). Thus, in this paper, by using TRIZ (Inventive Theory of Problem Solving in Russian abbreviation), we will propose an in-pipe cleaning robot of GACF with the 6-link sliding mechanism which can be adjusted to fit into the inner face of pipe using pneumatic pressure(not spring). The proposed in-pipe cleaning robot for GACF can have forward/backward movement itself as well as rotation of brush in cleaning. The robot body should have the limited size suitable for the smaller pipe with diameter of 300mm. In addition, for the pipe with diameter of 400mm, the links of robot should stretch to fit into the diameter of the pipe by using the sliding mechanism. Based on the conceptual design using TRIZ, we will set up the initial design of the robot in collaboration with a field engineer of Robot Valley, Inc. in Korea. For the optimal design of in-pipe cleaning robot, the maximum impulsive force of collision between the robot and the inner face of pipe is simulated by using RecurDyn® when the link of sliding mechanism is stretched to fit into the 400mm diameter of the pipe. The stresses exerted on the 6 links of sliding mechanism by the maximum impulsive force will be simulated by using ANSYS® Workbench based on the Design Of Experiment(in short DOE). Finally the optimal dimensions including thicknesses of 4 links will be decided in order to have the best safety factor as 2 in this paper as well as having the minimum mass of 4 links. 
It will be verified that the optimal design of 4 links has the best safety factor close to 2 as well as having the minimum mass of 4 links, compared with the initial design performed by the expert of Robot Valley, Inc. In addition, the prototype of in-pipe cleaning robot will be stated with further research.",
"title": ""
},
{
"docid": "neg:1840124_6",
"text": "BACKGROUND\nDolutegravir (GSK1349572), a once-daily HIV integrase inhibitor, has shown potent antiviral response and a favourable safety profile. We evaluated safety, efficacy, and emergent resistance in antiretroviral-experienced, integrase-inhibitor-naive adults with HIV-1 with at least two-class drug resistance.\n\n\nMETHODS\nING111762 (SAILING) is a 48 week, phase 3, randomised, double-blind, active-controlled, non-inferiority study that began in October, 2010. Eligible patients had two consecutive plasma HIV-1 RNA assessments of 400 copies per mL or higher (unless >1000 copies per mL at screening), resistance to two or more classes of antiretroviral drugs, and had one to two fully active drugs for background therapy. Participants were randomly assigned (1:1) to once-daily dolutegravir 50 mg or twice-daily raltegravir 400 mg, with investigator-selected background therapy. Matching placebo was given, and study sites were masked to treatment assignment. The primary endpoint was the proportion of patients with plasma HIV-1 RNA less than 50 copies per mL at week 48, evaluated in all participants randomly assigned to treatment groups who received at least one dose of study drug, excluding participants at one site with violations of good clinical practice. Non-inferiority was prespecified with a 12% margin; if non-inferiority was established, then superiority would be tested per a prespecified sequential testing procedure. A key prespecified secondary endpoint was the proportion of patients with treatment-emergent integrase-inhibitor resistance. The trial is registered at ClinicalTrials.gov, NCT01231516.\n\n\nFINDINGS\nAnalysis included 715 patients (354 dolutegravir; 361 raltegravir). At week 48, 251 (71%) patients on dolutegravir had HIV-1 RNA less than 50 copies per mL versus 230 (64%) patients on raltegravir (adjusted difference 7·4%, 95% CI 0·7 to 14·2); superiority of dolutegravir versus raltegravir was then concluded (p=0·03). 
Significantly fewer patients had virological failure with treatment-emergent integrase-inhibitor resistance on dolutegravir (four vs 17 patients; adjusted difference -3·7%, 95% CI -6·1 to -1·2; p=0·003). Adverse event frequencies were similar across groups; the most commonly reported events for dolutegravir versus raltegravir were diarrhoea (71 [20%] vs 64 [18%] patients), upper respiratory tract infection (38 [11%] vs 29 [8%]), and headache (33 [9%] vs 31 [9%]). Safety events leading to discontinuation were infrequent in both groups (nine [3%] dolutegravir, 14 [4%] raltegravir).\n\n\nINTERPRETATION\nOnce-daily dolutegravir, in combination with up to two other antiretroviral drugs, is well tolerated with greater virological effect compared with twice-daily raltegravir in this treatment-experienced patient group.\n\n\nFUNDING\nViiV Healthcare.",
"title": ""
},
{
"docid": "neg:1840124_7",
"text": "Public private partnerships (PPP) are long lasting contracts, generally involving large sunk investments, and developed in contexts of great uncertainty. If uncertainty is taken as an assumption, rather as a threat, it could be used as an opportunity. This requires managerial flexibility. The paper addresses the concept of contract flexibility as well as the several possibilities for its incorporation into PPP development. Based upon existing classifications, the authors propose a double entry matrix as a new model for contract flexibility. A case study has been selected – a hospital – to assess and evaluate the benefits of developing a flexible contract, building a model based on the real options theory. The evidence supports the initial thesis that allowing the concessionaire to adapt, under certain boundaries, the infrastructure and services to changing conditions when new information is known, does increase the value of the project. Some policy implications are drawn. © 2012 Elsevier Ltd. APM and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840124_8",
"text": "If two translation systems differ differ in performance on a test set, can we trust that this indicates a difference in true system quality? To answer this question, we describe bootstrap resampling methods to compute statistical significance of test results, and validate them on the concrete example of the BLEU score. Even for small test sizes of only 300 sentences, our methods may give us assurances that test result differences are real.",
"title": ""
},
{
"docid": "neg:1840124_9",
"text": "Twitter is one of the most popular social media platforms that has 313 million monthly active users which post 500 million tweets per day. This popularity attracts the attention of spammers who use Twitter for their malicious aims such as phishing legitimate users or spreading malicious software and advertises through URLs shared within tweets, aggressively follow/unfollow legitimate users and hijack trending topics to attract their attention, propagating pornography. In August of 2014, Twitter revealed that 8.5% of its monthly active users which equals approximately 23 million users have automatically contacted their servers for regular updates. Thus, detecting and filtering spammers from legitimate users are mandatory in order to provide a spam-free environment in Twitter. In this paper, features of Twitter spam detection presented with discussing their effectiveness. Also, Twitter spam detection methods are categorized and discussed with their pros and cons. The outdated features of Twitter which are commonly used by Twitter spam detection approaches are highlighted. Some new features of Twitter which, to the best of our knowledge, have not been mentioned by any other works are also presented. Keywords—Twitter spam; spam detection; spam filtering;",
"title": ""
},
{
"docid": "neg:1840124_10",
"text": "Mobile devices are increasingly the dominant Internet access technology. Nevertheless, high costs, data caps, and throttling are a source of widespread frustration, and a significant barrier to adoption in emerging markets. This paper presents Flywheel, an HTTP proxy service that extends the life of mobile data plans by compressing responses in-flight between origin servers and client browsers. Flywheel is integrated with the Chrome web browser and reduces the size of proxied web pages by 50% for a median user. We report measurement results from millions of users as well as experience gained during three years of operating and evolving the production",
"title": ""
},
{
"docid": "neg:1840124_11",
"text": "Traditional address scanning attacks mainly rely on the naive 'brute forcing' approach, where the entire IPv4 address space is exhaustively searched by enumerating different possibilities. However, such an approach is inefficient for IPv6 due to its vast subnet size (i.e., 2^64). As a result, it is widely assumed that address scanning attacks are less feasible in IPv6 networks. In this paper, we evaluate new IPv6 reconnaissance techniques in real IPv6 networks and expose how to leverage the Domain Name System (DNS) for IPv6 network reconnaissance. We collected IPv6 addresses from 5 regions and 100,000 domains by exploiting DNS reverse zone and DNSSEC records. We propose a DNS Guard (DNSG) to efficiently detect DNS reconnaissance attacks in IPv6 networks. DNSG is a plug and play component that could be added to the existing infrastructure. We implement DNSG using Bro and Suricata. Our results demonstrate that DNSG could effectively block DNS reconnaissance attacks.",
"title": ""
},
{
"docid": "neg:1840124_12",
"text": "Microwave power transmission (MPT) has had a long history before the more recent movement toward wireless power transmission (WPT). MPT can be applied not only to beam-type point-to-point WPT but also to an energy harvesting system fed from distributed or broadcasting radio waves. The key technology is the use of a rectenna, or rectifying antenna, to convert a microwave signal to a DC signal with high efficiency. In this paper, various rectennas suitable for MPT are discussed, including various rectifying circuits, frequency rectennas, and power rectennas.",
"title": ""
},
{
"docid": "neg:1840124_13",
"text": "High-voltage (HV) pulses are used in pulsed electric field (PEF) applications to provide an effective electroporation process, a process in which harmful microorganisms are disinfected when subjected to a PEF. Depending on the PEF application, different HV pulse specifications are required such as the pulse-waveform shape, the voltage magnitude, the pulse duration, and the pulse repetition rate. In this paper, a generic pulse-waveform generator (GPG) is proposed, and the GPG topology is based on half-bridge modular multilevel converter (HB-MMC) cells. The GPG topology is formed of four identical arms of series-connected HB-MMC cells forming an H-bridge. Unlike the conventional HB-MMC-based converters in HVdc transmission, the GPG load power flow is not continuous which leads to smaller size cell capacitors utilization; hence, smaller footprint of the GPG is achieved. The GPG topology flexibility allows the controller software to generate a basic multilevel waveform which can be manipulated to generate the commonly used PEF pulse waveforms. Therefore, the proposed topology offers modularity, redundancy, and scalability. The viability of the proposed GPG converter is validated by MATLAB/Simulink simulation and experimentation.",
"title": ""
},
{
"docid": "neg:1840124_14",
"text": "This paper presents a technology review of voltage-source-converter topologies for industrial medium-voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high-power voltage-source inverter and the most used multilevel-inverter topologies, including the neutral-point-clamped, cascaded H-bridge, and flying-capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. It is concluded that the topology and modulation-method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.",
"title": ""
},
{
"docid": "neg:1840124_15",
"text": "Neuronal activity causes local changes in cerebral blood flow, blood volume, and blood oxygenation. Magnetic resonance imaging (MRI) techniques sensitive to changes in cerebral blood flow and blood oxygenation were developed by high-speed echo planar imaging. These techniques were used to obtain completely noninvasive tomographic maps of human brain activity, by using visual and motor stimulus paradigms. Changes in blood oxygenation were detected by using a gradient echo (GE) imaging sequence sensitive to the paramagnetic state of deoxygenated hemoglobin. Blood flow changes were evaluated by a spin-echo inversion recovery (IR), tissue relaxation parameter T1-sensitive pulse sequence. A series of images were acquired continuously with the same imaging pulse sequence (either GE or IR) during task activation. Cine display of subtraction images (activated minus baseline) directly demonstrates activity-induced changes in brain MR signal observed at a temporal resolution of seconds. During 8-Hz patterned-flash photic stimulation, a significant increase in signal intensity (paired t test; P less than 0.001) of 1.8% +/- 0.8% (GE) and 1.8% +/- 0.9% (IR) was observed in the primary visual cortex (V1) of seven normal volunteers. The mean rise-time constant of the signal change was 4.4 +/- 2.2 s for the GE images and 8.9 +/- 2.8 s for the IR images. The stimulation frequency dependence of visual activation agrees with previous positron emission tomography observations, with the largest MR signal response occurring at 8 Hz. Similar signal changes were observed within the human primary motor cortex (M1) during a hand squeezing task and in animal models of increased blood flow by hypercapnia. By using intrinsic blood-tissue contrast, functional MRI opens a spatial-temporal window onto individual brain physiology.",
"title": ""
},
{
"docid": "neg:1840124_16",
"text": "A new low offset dynamic comparator for high resolution high speed analog-to-digital application has been designed. Inputs are reconfigured from the typical differential pair comparator such that near equal current distribution in the input transistors can be achieved for a meta-stable point of the comparator. Restricted signal swing clock for the tail current is also used to ensure constant currents in the differential pairs. Simulation based sensitivity analysis is performed to demonstrate the robustness of the new comparator with respect to stray capacitances, common mode voltage errors and timing errors in a TSMC 0.18mu process. Less than 10mV offset can be easily achieved with the proposed structure making it favorable for flash and pipeline data conversion applications",
"title": ""
},
{
"docid": "neg:1840124_17",
"text": "Momentum methods play a central role in optimization. Several momentum methods are provably optimal, and all use a technique called estimate sequences to analyze their convergence properties. The technique of estimate sequences has long been considered difficult to understand, leading many researchers to generate alternative, “more intuitive” methods and analyses. In this paper we show there is an equivalence between the technique of estimate sequences and a family of Lyapunov functions in both continuous and discrete time. This framework allows us to develop a simple and unified analysis of many existing momentum algorithms, introduce several new algorithms, and most importantly, strengthen the connection between algorithms and continuous-time dynamical systems.",
"title": ""
},
{
"docid": "neg:1840124_18",
"text": "This paper presents a novel orthomode transducer (OMT) with the dimension of WR-10 waveguide. The internal structure of the OMT is in the shape of Y so we named it a Y-junction OMT, it contain one square waveguide port with the dimension 2.54mm × 2.54mm and two WR-10 rectangular waveguide ports with the dimension of 1.27mm × 2.54mm. The operating frequency band of OMT is 70-95GHz (more than 30% bandwidth) with simulated insertion loss <;-0.3dB and cross polarization better than -40dB throughout the band for both TE10 and TE01 modes.",
"title": ""
}
] |
1840125 | Iterative Entity Alignment via Joint Knowledge Embeddings | [
{
"docid": "pos:1840125_0",
"text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. 
For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions (as of March 2011) defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. [Email addresses: daniel.rinser@alumni.hpi.uni-potsdam.de (Daniel Rinser), dustin.lange@hpi.uni-potsdam.de (Dustin Lange), naumann@hpi.uni-potsdam.de (Felix Naumann)] Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. 
Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that references the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. [Preprint submitted to Information Systems, October 19, 2012. Figure 1: A mapping between the English and German infoboxes for Berlin.] However, in the setting of Wikipedia infoboxes these techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. 
Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. (1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. 
As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions",
"title": ""
}
] | [
{
"docid": "neg:1840125_0",
"text": "Obesity is a multifactorial disease that results from a combination of both physiological, genetic, and environmental inputs. Obesity is associated with adverse health consequences, including T2DM, cardiovascular disease, musculoskeletal disorders, obstructive sleep apnea, and many types of cancer. The probability of developing adverse health outcomes can be decreased with maintained weight loss of 5% to 10% of current body weight. Body mass index and waist circumference are 2 key measures of body fat. A wide variety of tools are available to assess obesity-related risk factors and guide management.",
"title": ""
},
{
"docid": "neg:1840125_1",
"text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an arguments persuasiveness using gold attribute’s as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persusiveness and Related Attributes in Student Essays",
"title": ""
},
{
"docid": "neg:1840125_2",
"text": "Preference learning is a fundamental problem in various smart computing applications such as personalized recommendation. Collaborative filtering as a major learning technique aims to make use of users’ feedback, for which some recent works have switched from exploiting explicit feedback to implicit feedback. One fundamental challenge of leveraging implicit feedback is the lack of negative feedback, because there is only some observed relatively “positive” feedback available, making it difficult to learn a prediction model. In this paper, we propose a new and relaxed assumption of pairwise preferences over item-sets, which defines a user’s preference on a set of items (item-set) instead of on a single item only. The relaxed assumption can give us more accurate pairwise preference relationships. With this assumption, we further develop a general algorithm called CoFiSet (collaborative filtering via learning pairwise preferences over item-sets), which contains four variants, CoFiSet(SS), CoFiSet(MOO), CoFiSet(MOS) and CoFiSet(MSO), representing “Set vs. Set,” “Many ‘One vs. One’,” “Many ‘One vs. Set”’ and “Many ‘Set vs. One”’ pairwise comparisons, respectively. Experimental results show that our CoFiSet(MSO) performs better than several state-of-the-art methods on five ranking-oriented evaluation metrics on three real-world data sets.",
"title": ""
},
{
"docid": "neg:1840125_3",
"text": "The paper gives an overview on the developments at the German Aerospace Center DLR towards anthropomorphic robots which not only tr y to approach the force and velocity performance of humans, but also have simi lar safety and robustness features based on a compliant behaviour. We achieve thi s compliance either by joint torque sensing and impedance control, or, in our newes t systems, by compliant mechanisms (so called VIA variable impedance actuators), whose intrinsic compliance can be adjusted by an additional actuator. Both appr o ches required highly integrated mechatronic design and advanced, nonlinear con trol a d planning strategies, which are presented in this paper.",
"title": ""
},
{
"docid": "neg:1840125_4",
"text": "Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera which provides a full image compared to catadioptric visual sensors and do not increase the size and the weakness of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras with distortions directly included in its parameters. This unified projection model consists on a projection onto a virtual unitary sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results help to confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and experimented.",
"title": ""
},
{
"docid": "neg:1840125_5",
"text": "Malicious Android applications are currently the biggest threat in the scope of mobile security. To cope with their exponential growth and with their deceptive and hideous behaviors, static analysis signature based approaches are not enough to timely detect and tackle brand new threats such as polymorphic and composition malware. This work presents BRIDEMAID, a novel framework for analysis of Android apps' behavior, which exploits both a static and dynamic approach to detect malicious apps directly on mobile devices. The static analysis is based on n-grams matching to statically recognize malicious app execution patterns. The dynamic analysis is instead based on multi-level monitoring of device, app and user behavior to detect and prevent at runtime malicious behaviors. The framework has been tested against 2794 malicious apps reporting a detection accuracy of 99,7% and a negligible false positive rate, tested on a set of 10k genuine apps.",
"title": ""
},
{
"docid": "neg:1840125_6",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
{
"docid": "neg:1840125_7",
"text": "Opinion mining and sentiment analysis have become popular in linguistic resource rich languages. Opinions for such analysis are drawn from many forms of freely available online/electronic sources, such as websites, blogs, news re-ports and product reviews. But attention received by less resourced languages is significantly less. This is because the success of any opinion mining algorithm depends on the availability of resources, such as special lexicon and WordNet type tools. In this research, we implemented a less complicated but an effective approach that could be used to classify comments in less resourced languages. We experimented the approach for use with Sinhala Language where no such opinion mining or sentiment analysis has been carried out until this day. Our algorithm gives significantly promising results for analyzing sentiments in Sinhala for the first time.",
"title": ""
},
{
"docid": "neg:1840125_8",
"text": "The purpose of this study was to survey the mental toughness and physical activity among student university of Tabriz. Baecke physical activity questionnaire, mental thoughness48 and demographic questionnaire was distributed between students. 355 questionnaires were collected. Correlation, , multiple ANOVA and independent t-test was used for analyzing the hypotheses. The result showed that there was significant relationship between some of physical activity and mental toughness subscales. Two groups active and non-active were compared to find out the mental toughness differences, Student who obtained the 75% upper the physical activity questionnaire was active (n=97) and Student who obtained the 25% under the physical activity questionnaire was inactive group (n=95).The difference between active and non-active physically people showed that active student was significantly mentally toughness. It is expected that changes in physical activity levels significantly could be evidence of mental toughness changes, it should be noted that the other variables should not be ignored.",
"title": ""
},
{
"docid": "neg:1840125_9",
"text": "There is a trend in the scientific community to model and solve complex optimization problems by employing natural metaphors. This is mainly due to inefficiency of classical optimization algorithms in solving larger scale combinatorial and/or highly non-linear problems. The situation is not much different if integer and/or discrete decision variables are required in most of the linear optimization models as well. One of the main characteristics of the classical optimization algorithms is their inflexibility to adapt the solution algorithm to a given problem. Generally a given problem is modelled in such a way that a classical algorithm like simplex algorithm can handle it. This generally requires making several assumptions which might not be easy to validate in many situations. In order to overcome these limitations more flexible and adaptable general purpose algorithms are needed. It should be easy to tailor these algorithms to model a given problem as close as to reality. Based on this motivation many nature inspired algorithms were developed in the literature like genetic algorithms, simulated annealing and tabu search. It has also been shown that these algorithms can provide far better solutions in comparison to classical algorithms. A branch of nature inspired algorithms which are known as swarm intelligence is focused on insect behaviour in order to develop some meta-heuristics which can mimic insect's problem solution abilities. Ant colony optimization, particle swarm optimization, wasp nets etc. are some of the well known algorithms that mimic insect behaviour in problem modelling and solution. Artificial Bee Colony (ABC) is a relatively new member of swarm intelligence. ABC tries to model natural behaviour of real honey bees in food foraging. Honey bees use several mechanisms like waggle dance to optimally locate food sources and to search new ones. This makes them a good candidate for developing new intelligent search algorithms. 
In this chapter an extensive review of work on artificial bee algorithms is given. Afterwards, development of an ABC algorithm for solving generalized assignment problem which is known as NP-hard problem is presented in detail along with some comparisons. It is a well known fact that classical optimization techniques impose several limitations on solving mathematical programming and operational research models. This is mainly due to inherent solution mechanisms of these techniques. Solution strategies of classical optimization algorithms are generally depended on the type of objective and constraint",
"title": ""
},
{
"docid": "neg:1840125_10",
"text": "This paper addresses the problem of single image depth estimation (SIDE), focusing on improving the accuracy of deep neural network predictions. In a supervised learning scenario, the quality of predictions is intrinsically related to the training labels, which guide the optimization process. For indoor scenes, structured-light-based depth sensors (e.g. Kinect) are able to provide dense, albeit short-range, depth maps. On the other hand, for outdoor scenes, LiDARs are still considered the standard sensor, which comparatively provide much sparser measurements, especially in areas further away. Rather than modifying the neural network architecture to deal with sparse depth maps, this article introduces a novel densification method for depth maps, using the Hilbert Maps framework. A continuous occupancy map is produced based on 3D points from LiDAR scans, and the resulting reconstructed surface is projected into a 2D depth map with arbitrary resolution. Experiments conducted with various subsets of the KITTI dataset show a significant improvement produced by the proposed Sparse-to-Continuous technique, without the introduction of extra information into the training stage.",
"title": ""
},
{
"docid": "neg:1840125_11",
"text": "An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were academic major and in frequency of library use.",
"title": ""
},
{
"docid": "neg:1840125_12",
"text": "Background: We recently described “Author-ity,” a model for estimating the probability that two articles in MEDLINE, sharing the same author name, were written by the same individual. Features include shared title words, journal name, coauthors, medical subject headings, language, affiliations, and author name features (middle initial, suffix, and prevalence in MEDLINE). Here we test the hypothesis that the Author-ity model will suffice to disambiguate author names for the vast majority of articles in MEDLINE. Methods: Enhancements include: (a) incorporating first names and their variants, email addresses, and correlations between specific last names and affiliation words; (b) new methods of generating large unbiased training sets; (c) new methods for estimating the prior probability; (d) a weighted least squares algorithm for correcting transitivity violations; and (e) a maximum likelihood based agglomerative algorithm for computing clusters of articles that represent inferred author-individuals. Results: Pairwise comparisons were computed for all author names on all 15.3 million articles in MEDLINE (2006 baseline), that share last name and first initial, to create Author-ity 2006, a database that has each name on each article assigned to one of 6.7 million inferred author-individual clusters. Recall is estimated at ∼98.8%. Lumping (putting two different individuals into the same cluster) affects ∼0.5% of clusters, whereas splitting (assigning articles written by the same individual to >1 cluster) affects ∼2% of articles. Impact: The Author-ity model can be applied generally to other bibliographic databases. Author name disambiguation allows information retrieval and data integration to become person-centered, not just document-centered, setting the stage for new data mining and social network tools that will facilitate the analysis of scholarly publishing and collaboration behavior. 
Availability: The Author-ity 2006 database is available for nonprofit academic research, and can be freely queried via http://arrowsmith.psych.uic.edu.",
"title": ""
},
{
"docid": "neg:1840125_13",
"text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.",
"title": ""
},
{
"docid": "neg:1840125_14",
"text": "We present a content-based method for recommending citations in an academic paper draft. We embed a given query document into a vector space, then use its nearest neighbors as candidates, and rerank the candidates using a discriminative model trained to distinguish between observed and unobserved citations. Unlike previous work, our method does not require metadata such as author names which can be missing, e.g., during the peer review process. Without using metadata, our method outperforms the best reported results on PubMed and DBLP datasets with relative improvements of over 18% in F1@20 and over 22% in MRR. We show empirically that, although adding metadata improves the performance on standard metrics, it favors selfcitations which are less useful in a citation recommendation setup. We release an online portal for citation recommendation based on our method,1 and a new dataset OpenCorpus of 7 million research articles to facilitate future research on this task.",
"title": ""
},
{
"docid": "neg:1840125_15",
"text": "This article considers the delivery of efficient and effective dental services for patients whose disability and/or medical condition may not be obvious and which consequently can present a hidden challenge in the dental setting. Knowing that the patient has a particular condition, what its features are and how it impacts on dental treatment and oral health, and modifying treatment accordingly can minimise the risk of complications. The taking of a careful medical history that asks the right questions in a manner that encourages disclosure is key to highlighting hidden hazards and this article offers guidance for treating those patients who have epilepsy, latex sensitivity, acquired or inherited bleeding disorders and patients taking oral or intravenous bisphosphonates.",
"title": ""
},
{
"docid": "neg:1840125_16",
"text": "Evaluation of network security is an essential step in securing any network. This evaluation can help security professionals in making optimal decisions about how to design security countermeasures, to choose between alternative security architectures, and to systematically modify security configurations in order to improve security. However, the security of a network depends on a number of dynamically changing factors such as emergence of new vulnerabilities and threats, policy structure and network traffic. Identifying, quantifying and validating these factors using security metrics is a major challenge in this area. In this paper, we propose a novel security metric framework that identifies and quantifies objectively the most significant security risk factors, which include existing vulnerabilities, historical trend of vulnerability of the remotely accessible services, prediction of potential vulnerabilities for any general network service and their estimated severity and finally policy resistance to attack propagation within the network. We then describe our rigorous validation experiments using real- life vulnerability data of the past 6 years from National Vulnerability Database (NVD) [10] to show the high accuracy and confidence of the proposed metrics. Some previous works have considered vulnerabilities using code analysis. However, as far as we know, this is the first work to study and analyze these metrics for network security evaluation using publicly available vulnerability information and security policy configuration.",
"title": ""
},
{
"docid": "neg:1840125_17",
"text": "In this paper, we describe our approach to RecSys 2015 challenge problem. Given a dataset of item click sessions, the problem is to predict whether a session results in a purchase and which items are purchased if the answer is yes.\n We define a simpler analogous problem where given an item and its session, we try to predict the probability of purchase for the given item. For each session, the predictions result in a set of purchased items or often an empty set.\n We apply monthly time windows over the dataset. For each item in a session, we engineer features regarding the session, the item properties, and the time window. Then, a balanced random forest classifier is trained to perform predictions on the test set.\n The dataset is particularly challenging due to privacy-preserving definition of a session, the class imbalance problem, and the volume of data. We report our findings with respect to feature engineering, the choice of sampling schemes, and classifier ensembles. Experimental results together with benefits and shortcomings of the proposed approach are discussed. The solution is efficient and practical in commodity computers.",
"title": ""
},
{
"docid": "neg:1840125_18",
"text": "The needs of the child are paramount. The clinician’s first task is to diagnose the cause of symptoms and signs whether accidental, inflicted or the result of an underlying medical condition. Where abuse is diagnosed the task is to safeguard the child and treat the physical and psychological effects of maltreatment. A child is one who has not yet reached his or her 18th birthday. Child abuse is any action by another person that causes significant harm to a child or fails to meet a basic need. It involves acts of both commission and omission with effects on the child’s physical, developmental, and psychosocial well-being. The vast majority of carers from whatever walk of life, love, nurture and protect their children. A very few, in a momentary loss of control in an otherwise caring parent, cause much regretted injury. An even smaller number repeatedly maltreat their children in what becomes a pattern of abuse. One parent may harm, the other may fail to protect by omitting to seek help. Child abuse whether physical or psychological is unlawful.",
"title": ""
}
] |
1840126 | Multiobjective Combinatorial Optimization by Using Decomposition and Ant Colony | [
{
"docid": "pos:1840126_0",
"text": "We describe an artificial ant colony capable of solving the travelling salesman problem (TSP). Ants of the artificial colony are able to generate successively shorter feasible tours by using information accumulated in the form of a pheromone trail deposited on the edges of the TSP graph. Computer simulations demonstrate that the artificial ant colony is capable of generating good solutions to both symmetric and asymmetric instances of the TSP. The method is an example, like simulated annealing, neural networks and evolutionary computation, of the successful use of a natural metaphor to design an optimization algorithm.",
"title": ""
}
] | [
{
"docid": "neg:1840126_0",
"text": "This paper presents a graph signal denoising method with the trilateral filter defined in the graph spectral domain. The original trilateral filter (TF) is a data-dependent filter that is widely used as an edge-preserving smoothing method for image processing. However, because of the data-dependency, one cannot provide its frequency domain representation. To overcome this problem, we establish the graph spectral domain representation of the data-dependent filter, i.e., a spectral graph TF (SGTF). This representation enables us to design an effective graph signal denoising filter with a Tikhonov regularization. Moreover, for the proposed graph denoising filter, we provide a parameter optimization technique to search for a regularization parameter that approximately minimizes the mean squared error w.r.t. the unknown graph signal of interest. Comprehensive experimental results validate our graph signal processing-based approach for images and graph signals.",
"title": ""
},
{
"docid": "neg:1840126_1",
"text": "A new image decomposition scheme, called the adaptive directional total variation (ADTV) model, is proposed to achieve effective segmentation and enhancement for latent fingerprint images in this work. The proposed model is inspired by the classical total variation models, but it differentiates itself by integrating two unique features of fingerprints; namely, scale and orientation. The proposed ADTV model decomposes a latent fingerprint image into two layers: cartoon and texture. The cartoon layer contains unwanted components (e.g., structured noise) while the texture layer mainly consists of the latent fingerprint. This cartoon-texture decomposition facilitates the process of segmentation, as the region of interest can be easily detected from the texture layer using traditional segmentation methods. The effectiveness of the proposed scheme is validated through experimental results on the entire NIST SD27 latent fingerprint database. The proposed scheme achieves accurate segmentation and enhancement results, leading to improved feature detection and latent matching performance.",
"title": ""
},
{
"docid": "neg:1840126_2",
"text": "Seasonal affective disorder (SAD) is a syndrome characterized by recurrent depressions that occur annually at the same time each year. We describe 29 patients with SAD; most of them had a bipolar affective disorder, especially bipolar II, and their depressions were generally characterized by hypersomnia, overeating, and carbohydrate craving and seemed to respond to changes in climate and latitude. Sleep recordings in nine depressed patients confirmed the presence of hypersomnia and showed increased sleep latency and reduced slow-wave (delta) sleep. Preliminary studies in 11 patients suggest that extending the photoperiod with bright artificial light has an antidepressant effect.",
"title": ""
},
{
"docid": "neg:1840126_3",
"text": "Existing RGB-D object recognition methods either use channel specific handcrafted features, or learn features with deep networks. The former lack representation ability while the latter require large amounts of training data and learning time. In real-time robotics applications involving RGB-D sensors, we do not have the luxury of both. In this paper, we propose Localized Deep Extreme Learning Machines (LDELM) that efficiently learn features from RGB-D data. By using localized patches, not only is the problem of data sparsity solved, but the learned features are robust to occlusions and viewpoint variations. LDELM learns deep localized features in an unsupervised way from random patches of the training data. Each image is then feed-forwarded, patch-wise, through the LDELM to form a cuboid of features. The cuboid is divided into cells and pooled to get the final compact image representation which is then used to train an ELM classifier. Experiments on the benchmark Washington RGB-D and 2D3D datasets show that the proposed algorithm not only is significantly faster to train but also outperforms state-of-the-art methods in terms of accuracy and classification time.",
"title": ""
},
{
"docid": "neg:1840126_4",
"text": "A new class of target link flooding attacks (LFA) can cut off the Internet connections of a target area without being detected because they employ legitimate flows to congest selected links. Although new mechanisms for defending against LFA have been proposed, the deployment issues limit their usages since they require modifying routers. In this paper, we propose LinkScope, a novel system that employs both the end-to-end and the hopby-hop network measurement techniques to capture abnormal path performance degradation for detecting LFA and then correlate the performance data and traceroute data to infer the target links or areas. Although the idea is simple, we tackle a number of challenging issues, such as conducting large-scale Internet measurement through noncooperative measurement, assessing the performance on asymmetric Internet paths, and detecting LFA. We have implemented LinkScope with 7174 lines of C codes and the extensive evaluation in a testbed and the Internet show that LinkScope can quickly detect LFA with high accuracy and low false positive rate.",
"title": ""
},
{
"docid": "neg:1840126_5",
"text": "Public opinion polarization is here conceived as a process of alignment along multiple lines of potential disagreement and measured as growing constraint in individuals' preferences. Using NES data from 1972 to 2004, the authors model trends in issue partisanship-the correlation of issue attitudes with party identification-and issue alignment-the correlation between pairs of issues-and find a substantive increase in issue partisanship, but little evidence of issue alignment. The findings suggest that opinion changes correspond more to a resorting of party labels among voters than to greater constraint on issue attitudes: since parties are more polarized, they are now better at sorting individuals along ideological lines. Levels of constraint vary across population subgroups: strong partisans and wealthier and politically sophisticated voters have grown more coherent in their beliefs. The authors discuss the consequences of partisan realignment and group sorting on the political process and potential deviations from the classic pluralistic account of American politics.",
"title": ""
},
{
"docid": "neg:1840126_6",
"text": "BACKGROUND\nPlasma brain natriuretic peptide (BNP) level increases in proportion to the degree of right ventricular dysfunction in pulmonary hypertension. We sought to assess the prognostic significance of plasma BNP in patients with primary pulmonary hypertension (PPH).\n\n\nMETHODS AND RESULTS\nPlasma BNP was measured in 60 patients with PPH at diagnostic catheterization, together with atrial natriuretic peptide, norepinephrine, and epinephrine. Measurements were repeated in 53 patients after a mean follow-up period of 3 months. Forty-nine of the patients received intravenous or oral prostacyclin. During a mean follow-up period of 24 months, 18 patients died of cardiopulmonary causes. According to multivariate analysis, baseline plasma BNP was an independent predictor of mortality. Patients with a supramedian level of baseline BNP (>/=150 pg/mL) had a significantly lower survival rate than those with an inframedian level, according to Kaplan-Meier survival curves (P<0.05). Plasma BNP in survivors decreased significantly during the follow-up (217+/-38 to 149+/-30 pg/mL, P<0. 05), whereas that in nonsurvivors increased (365+/-77 to 544+/-68 pg/mL, P<0.05). Thus, survival was strikingly worse for patients with a supramedian value of follow-up BNP (>/=180 pg/mL) than for those with an inframedian value (P<0.0001).\n\n\nCONCLUSIONS\nA high level of plasma BNP, and in particular, a further increase in plasma BNP during follow-up, may have a strong, independent association with increased mortality rates in patients with PPH.",
"title": ""
},
{
"docid": "neg:1840126_7",
"text": "Detection of true human emotions has attracted a lot of interest in the recent years. The applications range from e-retail to health-care for developing effective companion systems with reliable emotion recognition. This paper proposes heart rate variability (HRV) features extracted from photoplethysmogram (PPG) signal obtained from a cost-effective PPG device such as Pulse Oximeter for detecting and recognizing the emotions on the basis of the physiological signals. The HRV features obtained from both time and frequency domain are used as features for classification of emotions. These features are extracted from the entire PPG signal obtained during emotion elicitation and baseline neutral phase. For analyzing emotion recognition, using the proposed HRV features, standard video stimuli are used. We have considered three emotions namely, happy, sad and neutral or null emotions. Support vector machines are used for developing the models and features are explored to achieve average emotion recognition of 83.8% for the above model and listed features.",
"title": ""
},
{
"docid": "neg:1840126_8",
"text": "As multi-core processors proliferate, it has become more important than ever to ensure efficient execution of parallel jobs on multiprocessor systems. In this paper, we study the problem of scheduling parallel jobs with arbitrary release time on multiprocessors while minimizing the jobs’ mean response time. We focus on non-clairvoyant scheduling schemes that adaptively reallocate processors based on periodic feedbacks from the individual jobs. Since it is known that no deterministic non-clairvoyant algorithm is competitive for this problem,we focus on resource augmentation analysis, and show that two adaptive algorithms, Agdeq and Abgdeq, achieve competitive performance using O(1) times faster processors than the adversary. These results are obtained through a general framework for analyzing the mean response time of any two-level adaptive scheduler. Our simulation results verify the effectiveness of Agdeq and Abgdeq by evaluating their performances over a wide range of workloads consisting of synthetic parallel jobs with different parallelism characteristics.",
"title": ""
},
{
"docid": "neg:1840126_9",
"text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.",
"title": ""
},
{
"docid": "neg:1840126_10",
"text": "Parallelizing existing sequential programs to run efficiently on multicores is hard. The Java 5 package java.util.concurrent (j.u.c.) supports writing concurrent programs: much of the complexity of writing thread-safe and scalable programs is hidden in the library. To use this package, programmers still need to reengineer existing code. This is tedious because it requires changing many lines of code, is error-prone because programmers can use the wrong APIs, and is omission-prone because programmers can miss opportunities to use the enhanced APIs. This paper presents our tool, Concurrencer, that enables programmers to refactor sequential code into parallel code that uses three j.u.c. concurrent utilities. Concurrencer does not require any program annotations. Its transformations span multiple, non-adjacent, program statements. A find-and-replace tool can not perform such transformations, which require program analysis. Empirical evaluation shows that Concurrencer refactors code effectively: Concurrencer correctly identifies and applies transformations that some open-source developers overlooked, and the converted code exhibits good speedup.",
"title": ""
},
{
"docid": "neg:1840126_11",
"text": "The aim of the experimental study described in this article is to investigate the effect of a lifelike character with subtle expressivity on the affective state of users. The character acts as a quizmaster in the context of a mathematical game. This application was chosen as a simple, and for the sake of the experiment, highly controllable, instance of human–computer interfaces and software. Subtle expressivity refers to the character’s affective response to the user’s performance by emulating multimodal human–human communicative behavior such as different body gestures and varying linguistic style. The impact of em-pathic behavior, which is a special form of affective response, is examined by deliberately frustrating the user during the game progress. There are two novel aspects in this investigation. First, we employ an animated interface agent to address the affective state of users rather than a text-based interface, which has been used in related research. Second, while previous empirical studies rely on questionnaires to evaluate the effect of life-like characters, we utilize physiological information of users (in addition to questionnaire data) in order to precisely associate the occurrence of interface events with users’ autonomic nervous system activity. The results of our study indicate that empathic character response can significantly decrease user stress see front matter r 2004 Elsevier Ltd. All rights reserved. .ijhcs.2004.11.009 cle is a significantly revised and extended version of Prendinger et al. (2003). nding author. Tel.: +813 4212 2650; fax: +81 3 3556 1916. dresses: helmut@nii.ac.jp (H. Prendinger), jmori@miv.t.u-tokyo.ac.jp (J. Mori), v.t.u-tokyo.ac.jp (M. Ishizuka).",
"title": ""
},
{
"docid": "neg:1840126_12",
"text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E",
"title": ""
},
{
"docid": "neg:1840126_13",
"text": "We address the problem of distance metric learning (DML), defined as learning a distance consistent with a notion of semantic similarity. Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship – an anchor point x is similar to a set of positive points Y , and dissimilar to a set of negative points Z, and a loss defined over these distances is minimized. While the specifics of the optimization differ, in this work we collectively call this type of supervision Triplets and all methods that follow this pattern Triplet-Based methods. These methods are challenging to optimize. A main issue is the need for finding informative triplets, which is usually achieved by a variety of tricks such as increasing the batch size, hard or semi-hard triplet mining, etc. Even with these tricks, the convergence rate of such methods is slow. In this paper we propose to optimize the triplet loss on a different space of triplets, consisting of an anchor data point and similar and dissimilar proxy points which are learned as well. These proxies approximate the original data points, so that a triplet loss over the proxies is a tight upper bound of the original loss. This proxy-based loss is empirically better behaved. As a result, the proxy-loss improves on state-of-art results for three standard zero-shot learning datasets, by up to 15% points, while converging three times as fast as other triplet-based losses.",
"title": ""
},
{
"docid": "neg:1840126_14",
"text": "Deep neural networks have achieved near-human accuracy levels in various types of classification and prediction tasks including images, text, speech, and video data. However, the networks continue to be treated mostly as black-box function approximators, mapping a given input to a classification output. The next step in this human-machine evolutionary process — incorporating these networks into mission critical processes such as medical diagnosis, planning and control — requires a level of trust association with the machine output. Typically, statistical metrics are used to quantify the uncertainty of an output. However, the notion of trust also depends on the visibility that a human has into the working of the machine. In other words, the neural network should provide human-understandable justifications for its output leading to insights about the inner workings. We call such models as interpretable deep networks. Interpretability is not a monolithic notion. In fact, the subjectivity of an interpretation, due to different levels of human understanding, implies that there must be a multitude of dimensions that together constitute interpretability. In addition, the interpretation itself can be provided either in terms of the low-level network parameters, or in terms of input features used by the model. In this paper, we outline some of the dimensions that are useful for model interpretability, and categorize prior work along those dimensions. In the process, we perform a gap analysis of what needs to be done to improve model interpretability.",
"title": ""
},
{
"docid": "neg:1840126_15",
"text": "Monocular visual odometry (VO) and simultaneous localization and mapping (SLAM) have seen tremendous improvements in accuracy, robustness, and efficiency, and have gained increasing popularity over recent years. Nevertheless, not so many discussions have been carried out to reveal the influences of three very influential yet easily overlooked aspects, such as photometric calibration, motion bias, and rolling shutter effect. In this work, we evaluate these three aspects quantitatively on the state of the art of direct, feature-based, and semi-direct methods, providing the community with useful practical knowledge both for better applying existing methods and developing new algorithms of VO and SLAM. Conclusions (some of which are counterintuitive) are drawn with both technical and empirical analyses to all of our experiments. Possible improvements on existing methods are directed or proposed, such as a subpixel accuracy refinement of oriented fast and rotated brief (ORB)-SLAM, which boosts its performance.",
"title": ""
},
{
"docid": "neg:1840126_16",
"text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.",
"title": ""
},
{
"docid": "neg:1840126_17",
"text": "We present a new type of actuatable display, called Tilt Displays, that provide visual feedback combined with multi-axis tilting and vertical actuation. Their ability to physically mutate provides users with an additional information channel that facilitates a range of new applications including collaboration and tangible entertainment while enhancing familiar applications such as terrain modelling by allowing 3D scenes to be rendered in a physical-3D manner. Through a mobile 3x3 custom built prototype, we examine the design space around Tilt Displays, categorise output modalities and conduct two user studies. The first, an exploratory study examines users' initial impressions of Tilt Displays and probes potential interactions and uses. The second takes a quantitative approach to understand interaction possibilities with such displays, resulting in the production of two user-defined gesture sets: one for manipulating the surface of the Tilt Display, the second for conducting everyday interactions.",
"title": ""
},
{
"docid": "neg:1840126_18",
"text": "We evaluated the cytotoxic effects of four prostaglandin analogs (PGAs) used to treat glaucoma. First we established primary cultures of conjunctival stromal cells from healthy donors. Then cell cultures were incubated with different concentrations (0, 0.1, 1, 5, 25, 50 and 100%) of commercial formulations of bimatoprost, tafluprost, travoprost and latanoprost for increasing periods (5 and 30 min, 1 h, 6 h and 24 h) and cell survival was assessed with three different methods: WST-1, MTT and calcein/AM-ethidium homodimer-1 assays. Our results showed that all PGAs were associated with a certain level of cell damage, which correlated significantly with the concentration of PGA used, and to a lesser extent with culture time. Tafluprost tended to be less toxic than bimatoprost, travoprost and latanoprost after all culture periods. The results for WST-1, MTT and calcein/AM-ethidium homodimer-1 correlated closely. When the average lethal dose 50 was calculated, we found that the most cytotoxic drug was latanoprost, whereas tafluprost was the most sparing of the ocular surface in vitro. These results indicate the need to design novel PGAs with high effectiveness but free from the cytotoxic effects that we found, or at least to obtain drugs that are functional at low dosages. The fact that the commercial formulation of tafluprost used in this work was preservative-free may support the current tendency to eliminate preservatives from eye drops for clinical use.",
"title": ""
},
{
"docid": "neg:1840126_19",
"text": "Shannon entropy is an efficient tool to measure uncertain information. However, it cannot handle the more uncertain situation when the uncertainty is represented by basic probability assignment (BPA), instead of probability distribution, under the framework of Dempster Shafer evidence theory. To address this issue, a new entropy, named as Deng entropy, is proposed. The proposed Deng entropy is the generalization of Shannon entropy. If uncertain information is represented by probability distribution, the uncertain degree measured by Deng entropy is the same as that of Shannon’s entropy. Some numerical examples are given to show the efficiency of Deng entropy.",
"title": ""
}
] |
1840127 | Understanding student learning trajectories using multimodal learning analytics within an embodied-interaction learning environment | [
{
"docid": "pos:1840127_0",
"text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.",
"title": ""
},
{
"docid": "pos:1840127_1",
"text": "Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. Two research communities -- Educational Data Mining (EDM) and Learning Analytics and Knowledge (LAK) have developed separately to address this need. This paper argues for increased and formal communication and collaboration between these communities in order to share research, methods, and tools for data mining and analysis in the service of developing both LAK and EDM fields.",
"title": ""
}
] | [
{
"docid": "neg:1840127_0",
"text": "The long-term ambition of the Tactile Internet is to enable a democratization of skill, and how it is being delivered globally. An integral part of this is to be able to transmit touch in perceived real-time, which is enabled by suitable robotics and haptics equipment at the edges, along with an unprecedented communications network. The fifth generation (5G) mobile communications systems will underpin this emerging Internet at the wireless edge. This paper presents the most important technology concepts, which lay at the intersection of the larger Tactile Internet and the emerging 5G systems. The paper outlines the key technical requirements and architectural approaches for the Tactile Internet, pertaining to wireless access protocols, radio resource management aspects, next generation core networking capabilities, edge-cloud, and edge-AI capabilities. The paper also highlights the economic impact of the Tactile Internet as well as a major shift in business models for the traditional telecommunications ecosystem.",
"title": ""
},
{
"docid": "neg:1840127_1",
"text": "We present a small object sensitive method for object detection. Our method is built based on SSD (Single Shot MultiBox Detector (Liu et al. 2016)), a simple but effective deep neural network for image object detection. The discrete nature of anchor mechanism used in SSD, however, may cause misdetection for the small objects located at gaps between the anchor boxes. SSD performs better for small object detection after circular shifts of the input image. Therefore, auxiliary feature maps are generated by conducting circular shifts over lower extra feature maps in SSD for small-object detection, which is equivalent to shifting the objects in order to fit the locations of anchor boxes. We call our proposed system Shifted SSD. Moreover, pinpoint accuracy of localization is of vital importance to small objects detection. Hence, two novel methods called Smooth NMS and IoU-Prediction module are proposed to obtain more precise locations. Then for video sequences, we generate trajectory hypothesis to obtain predicted locations in a new frame for further improved performance. Experiments conducted on PASCAL VOC 2007, along with MS COCO, KITTI and our small object video datasets, validate that both mAP and recall are improved with different degrees and the speed is almost the same as SSD.",
"title": ""
},
{
"docid": "neg:1840127_2",
"text": "Face identification and detection has become very popular, interesting and wide field of current research area. As there are several algorithms for face detection exist but none of the algorithms globally detect all sorts of human faces among the different colors and intensities in a given picture. In this paper, a novel method for face detection technique has been described. Here, the centers of both the eyes are detected using generic eye template matching method. After detecting the center of both the eyes, the corresponding face bounding box is determined. The experimental results have shown that the proposed algorithm is able to accomplish successfully proper detection and to mark the exact face and eye region in the given image.",
"title": ""
},
{
"docid": "neg:1840127_3",
"text": "The most data-efficient algorithms for reinforcement learning (RL) in robotics are based on uncertain dynamical models: after each episode, they first learn a dynamical model of the robot, then they use an optimization algorithm to find a policy that maximizes the expected return given the model and its uncertainties. It is often believed that this optimization can be tractable only if analytical, gradient-based algorithms are used; however, these algorithms require using specific families of reward functions and policies, which greatly limits the flexibility of the overall approach. In this paper, we introduce a novel model-based RL algorithm, called Black-DROPS (Black-box Data-efficient RObot Policy Search) that: (1) does not impose any constraint on the reward function or the policy (they are treated as black-boxes), (2) is as data-efficient as the state-of-the-art algorithm for data-efficient RL in robotics, and (3) is as fast (or faster) than analytical approaches when several cores are available. The key idea is to replace the gradient-based optimization algorithm with a parallel, black-box algorithm that takes into account the model uncertainties. We demonstrate the performance of our new algorithm on two standard control benchmark problems (in simulation) and a low-cost robotic manipulator (with a real robot).",
"title": ""
},
{
"docid": "neg:1840127_4",
"text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in objectoriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.",
"title": ""
},
{
"docid": "neg:1840127_5",
"text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.",
"title": ""
},
{
"docid": "neg:1840127_6",
"text": "Categorical compositional distributional model of [9] suggests a way to combine grammatical composition of the formal, type logical models with the corpus based, empirical word representations of distributional semantics. This paper contributes to the project by expanding the model to also capture entailment relations. This is achieved by extending the representations of words from points in meaning space to density operators, which are probability distributions on the subspaces of the space. A symmetric measure of similarity and an asymmetric measure of entailment is defined, where lexical entailment is measured using von Neumann entropy, the quantum variant of Kullback-Leibler divergence. Lexical entailment, combined with the composition map on word representations, provides a method to obtain entailment relations on the level of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "neg:1840127_7",
"text": "Keywords: DISC Measure, Squeezer, Categorical Data Clustering, Cosine similarity. References: Rishi Sayal and Vijay Kumar V. 2011. A novel Similarity Measure for Clustering Categorical Data Sets. International Journal of Computer Applications (0975-8887). Aditya Desai, Himanshu Singh and Vikram Pudi. 2011. DISC: Data-Intensive Similarity Measure for Categorical Data. Pacific-Asia Conference on Knowledge Discovery and Data Mining. Shyam Boriah, Varun Chandola and Vipin Kumar. 2008. Similarity Measures for Clustering Categorical Data: Comparative Evaluation. SIAM International Conference on Data Mining (SDM). Taoying Li, Yan Chen. 2009. Fuzzy Clustering Ensemble Algorithm for Partitional Categorical Data. IEEE International Conference on Business Intelligence and Financial Engineering.",
"title": ""
},
{
"docid": "neg:1840127_8",
"text": "Previously, ANSI/IEEE relay current transformer (CT) sizing criteria were based on traditional symmetrical calculations that are usually discussed by technical articles and manufacturers' guidelines. In 1996, IEEE Standard C37.110-1996 introduced (1+X/R) offset multiplying, current asymmetry, and current distortion factors, officially changing the CT sizing guideline. A critical concern is the performance of fast protective schemes (instantaneous or differential elements) during severe saturation of low-ratio CTs. Will the instantaneous element operate before the upstream breaker relay trips? Will the differential element misoperate for out-of-zone faults? The use of electromagnetic and analog relay technology does not assure selectivity. Modern microprocessor relays introduce additional uncertainty into the design/verification process with different sampling techniques and proprietary sensing/recognition/trip algorithms. This paper discusses the application of standard CT accuracy classes with modern ANSI/IEEE CT calculation methodology. This paper is the first of a two-part series; Part II provides analytical waveform analysis discussions to illustrate the concepts conveyed in Part I",
"title": ""
},
{
"docid": "neg:1840127_9",
"text": "The soaring demand for intelligent mobile applications calls for deploying powerful deep neural networks (DNNs) on mobile devices. However, the outstanding performance of DNNs notoriously relies on increasingly complex models, which in turn is associated with an increase in computational expense far surpassing mobile devices’ capacity. What is worse, app service providers need to collect and utilize a large volume of users’ data, which contain sensitive information, to build the sophisticated DNN models. Directly deploying these models on public mobile devices presents prohibitive privacy risk. To benefit from the on-device deep learning without the capacity and privacy concerns, we design a private model compression framework RONA. Following the knowledge distillation paradigm, we jointly use hint learning, distillation learning, and self learning to train a compact and fast neural network. The knowledge distilled from the cumbersome model is adaptively bounded and carefully perturbed to enforce differential privacy. We further propose an elegant query sample selection method to reduce the number of queries and control the privacy loss. A series of empirical evaluations as well as the implementation on an Android mobile device show that RONA can not only compress cumbersome models efficiently but also provide a strong privacy guarantee. For example, on SVHN, when a meaningful (9.83, 10−6)-differential privacy is guaranteed, the compact model trained by RONA can obtain 20× compression ratio and 19× speed-up with merely 0.97% accuracy loss.",
"title": ""
},
{
"docid": "neg:1840127_10",
"text": "The requirement for new flexible adaptive grippers is the ability to detect and recognize objects in their environments. It is known that robotic manipulators are highly nonlinear systems, and an accurate mathematical model is difficult to obtain, thus making it difficult no control using conventional techniques. Here, a novel design of an adaptive neuro fuzzy inference strategy (ANFIS) for controlling input displacement of a new adaptive compliant gripper is presented. This design of the gripper has embedded sensors as part of its structure. The use of embedded sensors in a robot gripper gives the control system the ability to control input displacement of the gripper and to recognize particular shapes of the grasping objects. Since the conventional control strategy is a very challenging task, fuzzy logic based controllers are considered as potential candidates for such an application. Fuzzy based controllers develop a control signal which yields on the firing of the rule base. The selection of the proper rule base depending on the situation can be achieved by using an ANFIS controller, which becomes an integrated method of approach for the control purposes. In the designed ANFIS scheme, neural network techniques are used to select a proper rule base, which is achieved using the back propagation algorithm. The simulation results presented in this paper show the effectiveness of the developed method. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840127_11",
"text": "It has been widely observed that there is no single “dominant” SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use socalled empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.",
"title": ""
},
{
"docid": "neg:1840127_12",
"text": "Most computer vision and especially segmentation tasks require to extract features that represent local appearance of patches. Relevant features can be further processed by learning algorithms to infer posterior probabilities that pixels belong to an object of interest. Deep Convolutional Neural Networks (CNN) define a particularly successful class of learning algorithms for semantic segmentation, although they proved to be very slow to train even when employing special purpose hardware. We propose, for the first time, a general purpose segmentation algorithm to extract the most informative and interpretable features as convolution kernels while simultaneously building a multivariate decision tree. The algorithm trains several orders of magnitude faster than regular CNNs and achieves state of the art results in processing quality on benchmark datasets.",
"title": ""
},
{
"docid": "neg:1840127_13",
"text": "Analog-to-digital converters are essential building blocks in modern electronic systems. They form the critical link between front-end analog transducers and back-end digital computers that can efficiently implement a wide variety of signal-processing functions. The wide variety of digital-signal-processing applications leads to the availability of a wide variety of analog-to-digital (A/D) converters of varying price, performance, and quality. Ideally, an A/D converter encodes a continuous-time analog input voltage, VIN, into a series of discrete N-bit digital words that satisfy the relation",
"title": ""
},
{
"docid": "neg:1840127_14",
"text": "Gummy smile constitutes a relatively frequent aesthetic alteration characterized by excessive exhibition of the gums during smiling movements of the upper lip. It is the result of an inadequate relation between the lower edge of the upper lip, the positioning of the anterosuperior teeth, the location of the upper jaw, and the gingival margin position with respect to the dental crown. Altered Passive Eruption (APE) is a clinical situation produced by excessive gum overlapping over the enamel limits, resulting in a short clinical crown appearance, that gives the sensation of hidden teeth. The term itself suggests the causal mechanism, i.e., failure in the passive phase of dental eruption, though there is no scientific evidence to support this. While there are some authors who consider APE to be a risk situation for periodontal health, its clearest clinical implication refers to oral esthetics. APE is a factor that frequently contributes to the presence of a gummy or gingival smile, and it can easily be corrected by periodontal surgery. Nevertheless, it is essential to establish a correct differential diagnosis and good treatment plan. A literature review is presented of the dental eruption process, etiological hypotheses of APE, its morphologic classification, and its clinical relevance.",
"title": ""
},
{
"docid": "neg:1840127_15",
"text": "Part of the ventral temporal lobe is thought to be critical for face perception, but what determines this specialization remains unknown. We present evidence that expertise recruits the fusiform gyrus 'face area'. Functional magnetic resonance imaging (fMRI) was used to measure changes associated with increasing expertise in brain areas selected for their face preference. Acquisition of expertise with novel objects (greebles) led to increased activation in the right hemisphere face areas for matching of upright greebles as compared to matching inverted greebles. The same areas were also more activated in experts than in novices during passive viewing of greebles. Expertise seems to be one factor that leads to specialization in the face area.",
"title": ""
},
{
"docid": "neg:1840127_16",
"text": "Humanoid robotics is attracting the interest of many research groups world-wide. In particular, developing humanoids requires the implementation of manipulation capabilities, which is still a most complex problem in robotics. This paper presents an overview of current activities in the development of humanoid robots, with special focus on manipulation. Then we discuss our current approach to the design and development of anthropomorphic sensorized hand and of anthropomorphic control and sensory-motor coordination schemes. Current achievements in the development of a robotic human hand prosthesis are described, together with preliminary experimental results, as well as in the implementation of biologically-inspired schemes for control and sensory-motor co-ordination in manipulation, derived from models of well-identified human brain areas.",
"title": ""
},
{
"docid": "neg:1840127_17",
"text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013",
"title": ""
},
{
"docid": "neg:1840127_18",
"text": "This paper describes the architecture and implementation of a shortest path processor, both in reconfigurable hardware and VLSI. This processor is based on the principles of recurrent spatiotemporal neural network. The processor’s operation is similar to Dijkstra’s algorithm and it can be used for network routing calculations. The objective of the processor is to find the least cost path in a weighted graph between a given node and one or more destinations. The digital implementation exhibits a regular interconnect structure and uses simple processing elements, which is well suited for VLSI implementation and reconfigurable hardware.",
"title": ""
}
] |
1840128 | A REVIEW ON IMAGE SEGMENTATION TECHNIQUES WITH REMOTE SENSING PERSPECTIVE | [
{
"docid": "pos:1840128_0",
"text": "Abstract--For the past decade, many image segmentation techniques have been proposed. These segmentation techniques can be categorized into three classes: (1) characteristic feature thresholding or clustering, (2) edge detection, and (3) region extraction. This survey summarizes some of these techniques. In the area of biomedical image segmentation, most proposed techniques fall into the categories of characteristic feature thresholding or clustering and edge detection.",
"title": ""
},
{
"docid": "pos:1840128_1",
"text": "Remote sensing from airborne and spaceborne platforms provides valuable data for mapping, environmental monitoring, disaster management and civil and military intelligence. However, to explore the full value of these data, the appropriate information has to be extracted and presented in standard format to import it into geo-information systems and thus allow efficient decision processes. The object-oriented approach can contribute to powerful automatic and semiautomatic analysis for most remote sensing applications. Synergetic use to pixel-based or statistical signal processing methods explores the rich information contents. Here, we explain principal strategies of object-oriented analysis, discuss how the combination with fuzzy methods allows implementing expert knowledge and describe a representative example for the proposed workflow from remote sensing imagery to GIS. The strategies are demonstrated using the first objectoriented image analysis software on the market, eCognition, which provides an appropriate link between remote sensing",
"title": ""
}
] | [
{
"docid": "neg:1840128_0",
"text": "In this paper, we use variational recurrent neural network to investigate the anomaly detection problem on graph time series. The temporal correlation is modeled by the combination of recurrent neural network (RNN) and variational inference (VI), while the spatial information is captured by the graph convolutional network. In order to incorporate external factors, we use feature extractor to augment the transition of latent variables, which can learn the influence of external factors. With the target function as accumulative ELBO, it is easy to extend this model to on-line method. The experimental study on traffic flow data shows the detection capability of the proposed method.",
"title": ""
},
{
"docid": "neg:1840128_1",
"text": "Back propagation training algorithms have been implemented by many researchers for their own purposes and provided publicly on the internet for others to use in verification of published results and for reuse in unrelated research projects. Often, the source code of a package is used as the basis for a new package for demonstrating new algorithm variations, or some functionality is added specifically for analysis of results. However, there are rarely any guarantees that the original implementation is faithful to the algorithm it represents, or that its code is bug free or accurate. This report attempts to look at a few implementations and provide a test suite which shows deficiencies in some software available which the average researcher may not be aware of, and may not have the time to discover on their own. This test suite may then be used to test the correctness of new packages.",
"title": ""
},
{
"docid": "neg:1840128_2",
"text": "Despite the proliferation of mobile health applications, few target low literacy users. This is a matter of concern because 43% of the United States population is functionally illiterate. To empower everyone to be a full participant in the evolving health system and prevent further disparities, we must understand the design needs of low literacy populations. In this paper, we present two complementary studies of four graphical user interface (GUI) widgets and three different cross-page navigation styles in mobile applications with a varying literacy, chronically-ill population. Participant's navigation and interaction styles were documented while they performed search tasks using high fidelity prototypes running on a mobile device. Results indicate that participants could use any non-text based GUI widgets. For navigation structures, users performed best when navigating a linear structure, but preferred the features of cross-linked navigation. Based on these findings, we provide some recommendations for designing accessible mobile applications for varying-literacy populations.",
"title": ""
},
{
"docid": "neg:1840128_3",
"text": "Resistive-switching memory (RRAM) based on transition metal oxides is a potential candidate for replacing Flash and dynamic random access memory in future generation nodes. Although very promising from the standpoints of scalability and technology, RRAM still has severe drawbacks in terms of understanding and modeling of the resistive-switching mechanism. This paper addresses the modeling of resistive switching in bipolar metal-oxide RRAMs. Reset and set processes are described in terms of voltage-driven ion migration within a conductive filament generated by electroforming. Ion migration is modeled by drift–diffusion equations with Arrhenius-activated diffusivity and mobility. The local temperature and field are derived from the self-consistent solution of carrier and heat conduction equations in a 3-D axis-symmetric geometry. The model accounts for set–reset characteristics, correctly describing the abrupt set and gradual reset transitions and allowing scaling projections for metal-oxide RRAM.",
"title": ""
},
{
"docid": "neg:1840128_4",
"text": "We consider the problem of a robot learning the mechanical properties of objects through physical interaction with the object, and introduce a practical, data-efficient approach for identifying the motion models of these objects. The proposed method utilizes a physics engine, where the robot seeks to identify the inertial and friction parameters of the object by simulating its motion under different values of the parameters and identifying those that result in a simulation which matches the observed real motions. The problem is solved in a Bayesian optimization framework. The same framework is used for both identifying the model of an object online and searching for a policy that would minimize a given cost function according to the identified model. Experimental results both in simulation and using a real robot indicate that the proposed method outperforms state-of-the-art model-free reinforcement learning approaches.",
"title": ""
},
{
"docid": "neg:1840128_5",
"text": "INTRODUCTION\nVaginismus is mostly unknown among clinicians and women. Vaginismus causes women to have fear, anxiety, and pain with penetration attempts.\n\n\nAIM\nTo present a large cohort of patients based on prior published studies approved by an institutional review board and the Food and Drug Administration using a comprehensive multimodal vaginismus treatment program to treat the physical and psychologic manifestations of women with vaginismus and to record successes, failures, and untoward effects of this treatment approach.\n\n\nMETHODS\nAssessment of vaginismus included a comprehensive pretreatment questionnaire, the Female Sexual Function Index (FSFI), and consultation. All patients signed a detailed informed consent. Treatment consisted of a multimodal approach including intravaginal injections of onabotulinumtoxinA (Botox) and bupivacaine, progressive dilation under conscious sedation, indwelling dilator, follow-up and support with office visits, phone calls, e-mails, dilation logs, and FSFI reports.\n\n\nMAIN OUTCOME MEASURES\nLogs noting dilation progression, pain and anxiety scores, time to achieve intercourse, setbacks, and untoward effects. Post-treatment FSFI scores were compared with preprocedure scores.\n\n\nRESULTS\nOne hundred seventy-one patients (71%) reported having pain-free intercourse at a mean of 5.1 weeks (median = 2.5). Six patients (2.5%) were unable to achieve intercourse within a 1-year period after treatment and 64 patients (26.6%) were lost to follow-up. The change in the overall FSFI score measured at baseline, 3 months, 6 months, and 1 year was statistically significant at the 0.05 level. Three patients developed mild temporary stress incontinence, two patients developed a short period of temporary blurred vision, and one patient developed temporary excessive vaginal dryness. All adverse events resolved by approximately 4 months. 
One patient required retreatment followed by successful coitus.\n\n\nCONCLUSION\nA multimodal program that treated the physical and psychologic aspects of vaginismus enabled women to achieve pain-free intercourse as noted by patient communications and serial female sexual function studies. Further studies are indicated to better understand the individual components of this multimodal treatment program. Pacik PT, Geletta S. Vaginismus Treatment: Clinical Trials Follow Up 241 Patients. Sex Med 2017;5:e114-e123.",
"title": ""
},
{
"docid": "neg:1840128_6",
"text": "We extend Convolutional Neural Networks (CNNs) on flat and regular domains (e.g. 2D images) to curved 2D manifolds embedded in 3D Euclidean space that are discretized as irregular surface meshes and widely used to represent geometric data in Computer Vision and Graphics. We define surface convolution on tangent spaces of a surface domain, where the convolution has two desirable properties: 1) the distortion of surface domain signals is locally minimal when being projected to the tangent space, and 2) the translation equi-variance property holds locally, by aligning tangent spaces for neighboring points with the canonical torsion-free parallel transport that preserves tangent space metric. To implement such a convolution, we rely on a parallel N-direction frame field on the surface that minimizes the field variation and therefore is as compatible as possible to and approximates the parallel transport. On the tangent spaces equipped with parallel frames, the computation of surface convolution becomes standard routine. The tangential frames have N rotational symmetry that must be disambiguated, which we resolve by duplicating the surface domain to construct its covering space induced by the parallel frames and grouping the feature maps into N sets accordingly; each surface convolution is computed on the N branches of the covering space with their respective feature maps while the kernel weights are shared. To handle the irregular data points of a discretized surface mesh while being able to share trainable kernel weights, we make the convolution semi-discrete, i.e. the convolution kernels are smooth polynomial functions, and their convolution with discrete surface data points becomes discrete sampling and weighted summation. In addition, pooling and unpooling operations for surface CNNs on a mesh are computed along the mesh hierarchy built through simplification. 
The presented surface-based CNNs allow us to do effective deep learning on surface meshes using network structures very similar to those for flat and regular domains. In particular, we show that for various tasks, including classification, segmentation and non-rigid registration, surface CNNs using only raw input signals achieve superior performance compared with other neural network models using sophisticated pre-computed input features, and enable a simple non-rigid human-body registration procedure by regressing to rest-pose positions directly.",
"title": ""
},
{
"docid": "neg:1840128_7",
"text": "Convolutional Neural Networks (CNNs) have been widely used for face recognition and have achieved extraordinary performance with large numbers of available face images of different people. However, it is hard to get uniformly distributed data for all people. In most face datasets, a large proportion of people have few face images. Only a small number of people appear frequently with more face images. These people with more face images have a higher impact on the feature learning than others. The imbalanced distribution makes it difficult to train a CNN model for feature representation that is general for each person, instead of mainly for the people with a large number of face images. To address this challenge, we proposed a center invariant loss which aligns the center of each person to enforce the learned features to have a general representation for all people. The center invariant loss penalizes the difference between each center of classes. With center invariant loss, we can train a robust CNN that treats each class equally regardless of the number of class samples. Extensive experiments demonstrate the effectiveness of the proposed approach. We achieve state-of-the-art results on the LFW and YTF datasets.",
"title": ""
},
{
"docid": "neg:1840128_8",
"text": "Handwritten digit recognition is a highly nonlinear problem. Recognition of handwritten numerals plays an active role in day-to-day life nowadays. In office automation, e-governance, and many other areas, reading printed or handwritten documents and converting them to digital media is a crucial and time-consuming task. So the system should be designed in such a way that it is capable of reading handwritten numerals and providing an appropriate response as humans do. However, handwritten digits vary from person to person because each writer has their own style, meaning the same digit or character/word written by different writers will differ, even across languages. This paper presents a survey of handwritten digit recognition systems and recent techniques, with three well-known classifiers, namely MLP, SVM, and k-NN, used for classification. It also presents a comparative analysis that describes recent methods and helps to identify future scope.",
"title": ""
},
{
"docid": "neg:1840128_9",
"text": "In this paper, interval-valued fuzzy planar graphs are defined and several properties are studied. Interval-valued fuzzy graphs are more efficient than fuzzy graphs, since the degree of membership of vertices and edges lies within the interval [0, 1] instead of at a single point as in fuzzy graphs. We also use the term ‘degree of planarity’ to measure the nature of planarity of an interval-valued fuzzy graph. Other relevant terms such as strong edges, interval-valued fuzzy faces, and strong interval-valued fuzzy faces are defined here. The interval-valued fuzzy dual graph, which is closely associated with the interval-valued fuzzy planar graph, is defined. Several properties of the interval-valued fuzzy dual graph are also studied. An example of an interval-valued fuzzy planar graph is given.",
"title": ""
},
{
"docid": "neg:1840128_10",
"text": "Nowadays, smart composite materials embed miniaturized sensors for structural health monitoring (SHM) in order to mitigate the risk of failure due to an overload or to unwanted inhomogeneity resulting from the fabrication process. Optical fiber sensors, and more particularly fiber Bragg grating (FBG) sensors, outperform traditional sensor technologies, as they are lightweight, small in size and offer convenient multiplexing capabilities with remote operation. They have thus been extensively associated to composite materials to study their behavior for further SHM purposes. This paper reviews the main challenges arising from the use of FBGs in composite materials. The focus will be made on issues related to temperature-strain discrimination, demodulation of the amplitude spectrum during and after the curing process as well as connection between the embedded optical fibers and the surroundings. The main strategies developed in each of these three topics will be summarized and compared, demonstrating the large progress that has been made in this field in the past few years.",
"title": ""
},
{
"docid": "neg:1840128_11",
"text": "In the present study, we tested in vitro different parts of 35 plants used by tribals of the Similipal Biosphere Reserve (SBR, Mayurbhanj district, India) for the management of infections. From each plant, three extracts were prepared with different solvents (water, ethanol, and acetone) and tested for antimicrobial (E. coli, S. aureus, C. albicans); anthelmintic (C. elegans); and antiviral (enterovirus 71) bioactivity. In total, 35 plant species belonging to 21 families were recorded from tribes of the SBR and periphery. Of the 35 plants, eight plants (23%) showed broad-spectrum in vitro antimicrobial activity (inhibiting all three test strains), while 12 (34%) exhibited narrow spectrum activity against individual pathogens (seven as anti-staphylococcal and five as anti-candidal). Plants such as Alangium salviifolium, Antidesma bunius, Bauhinia racemosa, Careya arborea, Caseria graveolens, Cleistanthus patulus, Colebrookea oppositifolia, Crotalaria pallida, Croton roxburghii, Holarrhena pubescens, Hypericum gaitii, Macaranga peltata, Protium serratum, Rubus ellipticus, and Suregada multiflora showed strong antibacterial effects, whilst Alstonia scholaris, Butea monosperma, C. arborea, C. pallida, Diospyros malbarica, Gmelina arborea, H. pubescens, M. peltata, P. serratum, Pterospermum acerifolium, R. ellipticus, and S. multiflora demonstrated strong antifungal activity. Plants such as A. salviifolium, A. bunius, Aporosa octandra, Barringtonia acutangula, C. graveolens, C. pallida, C. patulus, G. arborea, H. pubescens, H. gaitii, Lannea coromandelica, M. peltata, Melastoma malabathricum, Millettia extensa, Nyctanthes arbor-tristis, P. serratum, P. acerifolium, R. ellipticus, S. multiflora, Symplocos cochinchinensis, Ventilago maderaspatana, and Wrightia arborea inhibit survival of C. elegans and could be a potential source for anthelmintic activity. Additionally, plants such as A. bunius, C. graveolens, C. patulus, C. oppositifolia, H. gaitii, M. 
extensa, P. serratum, R. ellipticus, and V. maderaspatana showed anti-enteroviral activity. Most of the plants, whose traditional use as anti-infective agents by the tribals was well supported, show in vitro inhibitory activity against an enterovirus, bacteria (E. coli, S. aureus), a fungus (C. albicans), or a nematode (C. elegans).",
"title": ""
},
{
"docid": "neg:1840128_12",
"text": "Device-to-device communications enable two proximity users to transmit signal directly without going through the base station. It can increase network spectral efficiency and energy efficiency, reduce transmission delay, offload traffic for the BS, and alleviate congestion in the cellular core networks. However, many technical challenges need to be addressed for D2D communications to harvest the potential benefits, including device discovery and D2D session setup, D2D resource allocation to guarantee QoS, D2D MIMO transmission, as well as D2D-aided BS deployment in heterogeneous networks. In this article, the basic concepts of D2D communications are first introduced, and then existing fundamental works on D2D communications are discussed. In addition, some potential research topics and challenges are also identified.",
"title": ""
},
{
"docid": "neg:1840128_13",
"text": "Description\nThe American College of Physicians (ACP) developed this guideline to present the evidence and provide clinical recommendations on the management of gout.\n\n\nMethods\nUsing the ACP grading system, the committee based these recommendations on a systematic review of randomized, controlled trials; systematic reviews; and large observational studies published between January 2010 and March 2016. Clinical outcomes evaluated included pain, joint swelling and tenderness, activities of daily living, patient global assessment, recurrence, intermediate outcomes of serum urate levels, and harms.\n\n\nTarget Audience and Patient Population\nThe target audience for this guideline includes all clinicians, and the target patient population includes adults with acute or recurrent gout.\n\n\nRecommendation 1\nACP recommends that clinicians choose corticosteroids, nonsteroidal anti-inflammatory drugs (NSAIDs), or colchicine to treat patients with acute gout. (Grade: strong recommendation, high-quality evidence).\n\n\nRecommendation 2\nACP recommends that clinicians use low-dose colchicine when using colchicine to treat acute gout. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 3\nACP recommends against initiating long-term urate-lowering therapy in most patients after a first gout attack or in patients with infrequent attacks. (Grade: strong recommendation, moderate-quality evidence).\n\n\nRecommendation 4\nACP recommends that clinicians discuss benefits, harms, costs, and individual preferences with patients before initiating urate-lowering therapy, including concomitant prophylaxis, in patients with recurrent gout attacks. (Grade: strong recommendation, moderate-quality evidence).",
"title": ""
},
{
"docid": "neg:1840128_14",
"text": "Negative emotions are reliably associated with poorer health (e.g., Kiecolt-Glaser, McGuire, Robles, & Glaser, 2002), but only recently has research begun to acknowledge the important role of positive emotions for our physical health (Fredrickson, 2003). We examine the link between dispositional positive affect and one potential biological pathway between positive emotions and health-proinflammatory cytokines, specifically levels of interleukin-6 (IL-6). We hypothesized that greater trait positive affect would be associated with lower levels of IL-6 in a healthy sample. We found support for this hypothesis across two studies. We also explored the relationship between discrete positive emotions and IL-6 levels, finding that awe, measured in two different ways, was the strongest predictor of lower levels of proinflammatory cytokines. These effects held when controlling for relevant personality and health variables. This work suggests a potential biological pathway between positive emotions and health through proinflammatory cytokines.",
"title": ""
},
{
"docid": "neg:1840128_15",
"text": "Single-phase power factor correction (PFC) ac-dc converters are widely used in the industry for ac-dc power conversion from single phase ac-mains to an isolated output dc voltage. Typically, for high-power applications, such converters use an ac-dc boost input converter followed by a dc-dc full-bridge converter. A new ac-dc single-stage high-power universal PFC ac input full-bridge, pulse-width modulated converter is proposed in this paper. The converter can operate with an excellent input power factor, continuous input and output currents, and a non-excessive intermediate dc bus voltage and has reduced number of semiconductor devices thus presenting a cost-effective novel solution for such applications. In this paper, the operation of the proposed converter is explained, a steady-state analysis of its operation is performed, and the results of the analysis are used to develop a procedure for its design. The operation of the proposed converter is confirmed with results obtained from an experimental prototype.",
"title": ""
},
{
"docid": "neg:1840128_16",
"text": "With the rapid development of computer science and technology, it has become a major problem for users to quickly find useful or needed information. Text categorization can help people solve this problem. Feature selection has become one of the most critical techniques in the field of automatic text categorization. A new method of text feature selection based on Information Gain and Genetic Algorithm is proposed in this paper. This method chooses features based on information gain combined with the frequency of items. Meanwhile, for information filtering systems, this method improves the fitness function to fully consider the characteristics of weight, text, and vector similarity dimension, etc. Experiments have proved that the method can reduce the dimension of the text vector and improve the precision of text classification.",
"title": ""
},
{
"docid": "neg:1840128_17",
"text": "In this paper we apply the Local Binary Pattern on Three Orthogonal Planes (LBP-TOP) descriptor to the field of human action recognition. A video sequence is described as a collection of spatial-temporal words after the detection of space-time interest points and the description of the area around them. Our contribution has been in the description part, showing LBP-TOP to be a promising descriptor for human action classification purposes. We have also developed several extensions to the descriptor to enhance its performance in human action recognition, showing the method to be computationally efficient.",
"title": ""
},
{
"docid": "neg:1840128_18",
"text": "Silicone oils have wide range of applications in personal care products due to their unique properties of high lubricity, non-toxicity, excessive spreading and film formation. They are usually employed in the form of emulsions due to their inert nature. Until now, different conventional emulsification techniques have been developed and applied to prepare silicone oil emulsions. The size and uniformity of emulsions showed important influence on stability of droplets, which further affect the application performance. Therefore, various strategies were developed to improve the stability as well as application performance of silicone oil emulsions. In this review, we highlight different factors influencing the stability of silicone oil emulsions and explain various strategies to overcome the stability problems. In addition, the silicone deposition on the surface of hair substrates and different approaches to increase their deposition are also discussed in detail.",
"title": ""
},
{
"docid": "neg:1840128_19",
"text": "The dense very deep submicron (VDSM) system on chips (SoC) face a serious limitation in performance due to reverse scaling of global interconnects. Interconnection techniques which decrease delay, delay variation and ensure signal integrity, play an important role in the growth of the semiconductor industry into future generations. Current-mode low-swing interconnection techniques provide an attractive alternative to conventional full-swing voltage mode signaling in terms of delay, power and noise immunity. In this paper, we present a new current-mode low-swing interconnection technique which reduces the delay and delay variations in global interconnects. Extensive simulations for performance of our circuit under crosstalk, supply voltage, process and temperature variations were performed. The results indicate significant savings in power, reduction in delay and increase in noise immunity compared to other techniques.",
"title": ""
}
] |
1840129 | Polyglot Neural Language Models: A Case Study in Cross-Lingual Phonetic Representation Learning | [
{
"docid": "pos:1840129_0",
"text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"title": ""
},
{
"docid": "pos:1840129_1",
"text": "In this paper, we investigate the application of recurrent neural network language models (RNNLM) and factored language models (FLM) to the task of language modeling for Code-Switching speech. We present a way to integrate part-of-speech tags (POS) and language information (LID) into these models which leads to significant improvements in terms of perplexity. Furthermore, a comparison between RNNLMs and FLMs and a detailed analysis of perplexities on the different backoff levels are performed. Finally, we show that recurrent neural networks and factored language models can be combined using linear interpolation to achieve the best performance. The final combined language model provides 37.8% relative improvement in terms of perplexity on the SEAME development set and a relative improvement of 32.7% on the evaluation set compared to the traditional n-gram language model.",
"title": ""
}
] | [
{
"docid": "neg:1840129_0",
"text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured the intima-media thickness of the carotid arteries and used measurements of arterial wall thickness as additional features. Patients underwent carotid artery scanning using high-resolution ultrasound as devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (giving about 85%–90% goodness of fit) outperformed the other classifiers.",
"title": ""
},
{
"docid": "neg:1840129_1",
"text": "This article presents a 4:1 wide-band balun that won the student design competition for wide-band baluns held during the 2016 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2016) in San Francisco, California. For this contest, sponsored by Technical Committee MTT-17, participants were required to implement and evaluate their own baluns, with the winning entry achieving the widest bandwidth while satisfying the conditions of the competition rules during measurements at IMS2016. Some of the conditions were revised for this year's competition compared with previous competitions as follows.",
"title": ""
},
{
"docid": "neg:1840129_2",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a detailed study of 11 widely used internal clustering validation measures for crisp clustering. From five conventional aspects of clustering, we investigate their validation properties. Experiment results show that S_Dbw is the only internal validation measure which performs well in all five aspects, while other measures have certain limitations in different application scenarios.",
"title": ""
},
{
"docid": "neg:1840129_3",
"text": "The importance of motion in attracting attention is well known. While watching videos, where motion is prevalent, how do we quantify the regions that are motion salient? In this paper, we investigate the role of motion in attention and compare it with the influence of other low-level features like image orientation and intensity. We propose a framework for motion saliency. In particular, we integrate motion vector information with spatial and temporal coherency to generate a motion attention map. The results show that our model achieves good performance in identifying regions that are moving and salient. We also find motion to have greater influence on saliency than other low-level features when watching videos.",
"title": ""
},
{
"docid": "neg:1840129_4",
"text": "Data that encompasses relationships is represented by a graph of interconnected nodes. Social network analysis is the study of such graphs which examines questions related to structures and patterns that can lead to the understanding of the data and predicting the trends of social networks. Static analysis, where the time of interaction is not considered (i.e., the network is frozen in time), misses the opportunity to capture the evolutionary patterns in dynamic networks. Specifically, detecting the community evolutions, the community structures that changes in time, provides insight into the underlying behaviour of the network. Recently, a number of researchers have started focusing on identifying critical events that characterize the evolution of communities in dynamic scenarios. In this paper, we present a framework for modeling and detecting community evolution in social networks, where a series of significant events is defined for each community. A community matching algorithm is also proposed to efficiently identify and track similar communities over time. We also define the concept of meta community which is a series of similar communities captured in different timeframes and detected by our matching algorithm. We illustrate the capabilities and potential of our framework by applying it to two real datasets. Furthermore, the events detected by the framework is supplemented by extraction and investigation of the topics discovered for each community. © 2011 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "neg:1840129_5",
"text": "In the past decade, graph-based structures have penetrated nearly every aspect of our lives. The detection of anomalies in these networks has become increasingly important, such as in exposing infected endpoints in computer networks or identifying socialbots. In this study, we present a novel unsupervised two-layered meta-classifier that can detect irregular vertices in complex networks solely by utilizing topology-based features. Following the reasoning that a vertex with many improbable links has a higher likelihood of being anomalous, we applied our method on 10 networks of various scales, from a network of several dozen students to online networks with millions of vertices. In every scenario, we succeeded in identifying anomalous vertices with lower false positive rates and higher AUCs compared to other prevalent methods. Moreover, we demonstrated that the presented algorithm is generic, and efficient both in revealing fake users and in disclosing the influential people in social networks.",
"title": ""
},
{
"docid": "neg:1840129_6",
"text": "Computing systems have steadily evolved into more complex, interconnected, heterogeneous entities. Ad-hoc techniques are most often used in designing them. Furthermore, researchers and designers from both academia and industry have focused on vertical approaches to emphasizing the advantages of one specific feature such as fault tolerance, security or performance. Such approaches led to very specialized computing systems and applications. Autonomic systems, as an alternative approach, can control and manage themselves automatically with minimal intervention by users or system administrators. This paper presents an autonomic framework in developing and implementing autonomic computing services and applications. Firstly, it shows how to apply this framework to autonomically manage the security of networks. Then an approach is presented to develop autonomic components from existing legacy components such as software modules/applications or hardware resources (router, processor, server, etc.). Experimental evaluation of the prototype shows that the system can be programmed dynamically to enable the components to operate autonomously.",
"title": ""
},
{
"docid": "neg:1840129_7",
"text": "Database forensics is a domain that uses database content and metadata to reveal malicious activities on database systems in an Internet of Things environment. Although the concept of database forensics has been around for a while, the investigation of cybercrime activities and cyber breaches in an Internet of Things environment would benefit from the development of a common investigative standard that unifies the knowledge in the domain. Therefore, this paper proposes common database forensic investigation processes using a design science research approach. The proposed process comprises four phases, namely: 1) identification; 2) artefact collection; 3) artefact analysis; and 4) the documentation and presentation process. It allows the reconciliation of the concepts and terminologies of all common database forensic investigation processes; hence, it facilitates the sharing of knowledge on database forensic investigation among domain newcomers, users, and practitioners.",
"title": ""
},
{
"docid": "neg:1840129_8",
"text": "Wetlands all over the world have been lost or are threatened in spite of various international agreements and national policies. This is caused by: (1) the public nature of many wetlands products and services; (2) user externalities imposed on other stakeholders; and (3) policy intervention failures that are due to a lack of consistency among government policies in different areas (economics, environment, nature protection, physical planning, etc.). All three causes are related to information failures which in turn can be linked to the complexity and ‘invisibility’ of spatial relationships among groundwater, surface water and wetland vegetation. Integrated wetland research combining social and natural sciences can help in part to solve the information failure to achieve the required consistency across various government policies. An integrated wetland research framework suggests that a combination of economic valuation, integrated modelling, stakeholder analysis, and multi-criteria evaluation can provide complementary insights into sustainable and welfare-optimising wetland management and policy. Subsequently, each of the various components of such integrated wetland research is reviewed and related to wetland management policy. © 2000 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840129_9",
"text": "Today’s Cyber-Physical Systems (CPSs) are large, complex, and affixed with networked sensors and actuators that are targets for cyber-attacks. Conventional detection techniques are unable to deal with the increasingly dynamic and complex nature of the CPSs. On the other hand, the networked sensors and actuators generate large amounts of data streams that can be continuously monitored for intrusion events. Unsupervised machine learning techniques can be used to model the system behaviour and classify deviant behaviours as possible attacks. In this work, we proposed a novel Generative Adversarial Networks-based Anomaly Detection (GAN-AD) method for such complex networked CPSs. We used LSTM-RNN in our GAN to capture the distribution of the multivariate time series of the sensors and actuators under normal working conditions of a CPS. Instead of treating each sensor’s and actuator’s time series independently, we model the time series of multiple sensors and actuators in the CPS concurrently to take into account of potential latent interactions between them. To exploit both the generator and the discriminator of our GAN, we deployed the GAN-trained discriminator together with the residuals between generator-reconstructed data and the actual samples to detect possible anomalies in the complex CPS. We used our GAN-AD to distinguish abnormal attacked situations from normal working conditions for a complex six-stage Secure Water Treatment (SWaT) system. Experimental results showed that the proposed strategy is effective in identifying anomalies caused by various attacks with high detection rate and low false positive rate as compared to existing methods.",
"title": ""
},
{
"docid": "neg:1840129_10",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each bounded section of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:Computer-Aided Design (CAD); G.1.2[Approximation]:Spline Approximation",
"title": ""
},
{
"docid": "neg:1840129_11",
"text": "The evolution of modern electronic devices is outpacing the scalability and effectiveness of the tools used to analyze digital evidence recovered from them. Indeed, current digital forensic techniques and tools are unable to handle large datasets in an efficient manner. As a result, the time and effort required to conduct digital forensic investigations are increasing. This paper describes a promising digital forensic visualization framework that displays digital evidence in a simple and intuitive manner, enhancing decision making and facilitating the explanation of phenomena in evidentiary data.",
"title": ""
},
{
"docid": "neg:1840129_12",
"text": "During even the most quiescent behavioral periods, the cortex and thalamus express rich spontaneous activity in the form of slow (<1 Hz), synchronous network state transitions. Throughout this so-called slow oscillation, cortical and thalamic neurons fluctuate between periods of intense synaptic activity (Up states) and almost complete silence (Down states). The two decades since the original characterization of the slow oscillation in the cortex and thalamus have seen considerable advances in deciphering the cellular and network mechanisms associated with this pervasive phenomenon. There are, nevertheless, many questions regarding the slow oscillation that await more thorough illumination, particularly the mechanisms by which Up states initiate and terminate, the functional role of the rhythmic activity cycles in unconscious or minimally conscious states, and the precise relation between Up states and the activated states associated with waking behavior. Given the substantial advances in multineuronal recording and imaging methods in both in vivo and in vitro preparations, the time is ripe to take stock of our current understanding of the slow oscillation and pave the way for future investigations of its mechanisms and functions. My aim in this Review is to provide a comprehensive account of the mechanisms and functions of the slow oscillation, and to suggest avenues for further exploration.",
"title": ""
},
{
"docid": "neg:1840129_13",
"text": "Grounded Theory is a powerful research method for collecting and analysing research data. It was ‘discovered’ by Glaser & Strauss (1967) in the 1960s but is still not widely used or understood by researchers in some industries or PhD students in some science disciplines. This paper demonstrates the steps in the method and describes the difficulties encountered in applying Grounded Theory (GT). A fundamental part of the analysis method in GT is the derivation of codes, concepts and categories. Codes and coding are explained and illustrated in Section 3. Merging the codes to discover emerging concepts is a central part of the GT method and is shown in Section 4. Glaser and Strauss’s constant comparison step is applied and illustrated so that the emerging categories can be seen coming from the concepts and leading to the emergent theory grounded in the data in Section 5. However, the initial applications of the GT method did have difficulties. Problems encountered when using the method are described to inform the reader of the realities of the approach. The data used in the illustrative analysis comes from recent IS/IT Case Study research into configuration management (CM) and the use of commercially available computer products (COTS). Why and how the GT approach was appropriate is explained in Section 6. However, the focus is on reporting GT as a research method rather than the results of the Case Study.",
"title": ""
},
{
"docid": "neg:1840129_14",
"text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.",
"title": ""
},
{
"docid": "neg:1840129_15",
"text": "Concept-to-text generation refers to the task of automatically producing textual output from non-linguistic input. We present a joint model that captures content selection (“what to say”) and surface realization (“how to say”) in an unsupervised domain-independent fashion. Rather than breaking up the generation process into a sequence of local decisions, we define a probabilistic context-free grammar that globally describes the inherent structure of the input (a corpus of database records and text describing some of them). We recast generation as the task of finding the best derivation tree for a set of database records and describe an algorithm for decoding in this framework that allows to intersect the grammar with additional information capturing fluency and syntactic well-formedness constraints. Experimental evaluation on several domains achieves results competitive with state-of-the-art systems that use domain specific constraints, explicit feature engineering or labeled data.",
"title": ""
},
{
"docid": "neg:1840129_16",
"text": "1. We studied the responses of 103 neurons in visual area V4 of anesthetized macaque monkeys to two novel classes of visual stimuli, polar and hyperbolic sinusoidal gratings. We suspected on both theoretical and experimental grounds that these stimuli would be useful for characterizing cells involved in intermediate stages of form analysis. Responses were compared with those obtained with conventional Cartesian sinusoidal gratings. Five independent, quantitative analyses of neural responses were carried out on the entire population of cells. 2. For each cell, responses to the most effective Cartesian, polar, and hyperbolic grating were compared directly. In 18 of 103 cells, the peak response evoked by one stimulus class was significantly different from the peak response evoked by the remaining two classes. Of the remaining 85 cells, 74 had response peaks for the three stimulus classes that were all within a factor of 2 of one another. 3. An information-theoretic analysis of the trial-by-trial responses to each stimulus showed that all but two cells transmitted significant information about the stimulus set as a whole. Comparison of the information transmitted about each stimulus class showed that 23 of 103 cells transmitted a significantly different amount of information about one class than about the remaining two classes. Of the remaining 80 cells, 55 had information transmission rates for the three stimulus classes that were all within a factor of 2 of one another. 4. To identify cells that had orderly tuning profiles in the various stimulus spaces, responses to each stimulus class were fit with a simple Gaussian model. Tuning curves were successfully fit to the data from at least one stimulus class in 98 of 103 cells, and such fits were obtained for at least two classes in 87 cells. Individual neurons showed a wide range of tuning profiles, with response peaks scattered throughout the various stimulus spaces; there were no major differences in the distributions of the widths or positions of tuning curves obtained for the different stimulus classes. 5. Neurons were classified according to their response profiles across the stimulus set with two objective methods, hierarchical cluster analysis and multidimensional scaling. These two analyses produced qualitatively similar results. The most distinct group of cells was highly selective for hyperbolic gratings. The majority of cells fell into one of two groups that were selective for polar gratings: one selective for radial gratings and one selective for concentric or spiral gratings. There was no group whose primary selectivity was for Cartesian gratings. 6. To determine whether cells belonging to identified classes were anatomically clustered, we compared the distribution of classified cells across electrode penetrations with the distribution that would be expected if the cells were distributed randomly. Cells with similar response profiles were often anatomically clustered. 7. A position test was used to determine whether response profiles were sensitive to precise stimulus placement. A subset of Cartesian and non-Cartesian gratings was presented at several positions in and near the receptive field. The test was run on 13 cells from the present study and 28 cells from an earlier study. All cells showed a significant degree of invariance in their selectivity across changes in stimulus position of up to 0.5 classical receptive field diameters. 8. A length and width test was used to determine whether cells preferring non-Cartesian gratings were selective for Cartesian grating length or width. Responses to Cartesian gratings shorter or narrower than the classical receptive field were compared with those obtained with full-field Cartesian and non-Cartesian gratings in 29 cells. Of the four cells that had shown significant preferences for non-Cartesian gratings in the main test, none showed tuning for Cartesian grating length or width that would account for their non-Cartesian responses.",
"title": ""
},
{
"docid": "neg:1840129_17",
"text": "Mobile crowdsensing is becoming a vital technique for environment monitoring, infrastructure management, and social computing. However, deploying mobile crowdsensing applications in large-scale environments is not a trivial task. It creates a tremendous burden on application developers as well as mobile users. In this paper we try to reveal the barriers hampering the scale-up of mobile crowdsensing applications, and to offer our initial thoughts on the potential solutions to lowering the barriers.",
"title": ""
},
{
"docid": "neg:1840129_18",
"text": "Secure identity tokens such as Electronic Identity (eID) cards are emerging everywhere. At the same time user-centric identity management gains acceptance. Anonymous credential schemes are the optimal realization of user-centricity. However, on inexpensive hardware platforms, typically used for eID cards, these schemes could not be made to meet the necessary requirements such as future-proof key lengths and transaction times on the order of 10 seconds. The reasons for this is the need for the hardware platform to be standardized and certified. Therefore an implementation is only possible as a Java Card applet. This results in severe restrictions: little memory (transient and persistent), an 8-bit CPU, and access to hardware acceleration for cryptographic operations only by defined interfaces such as RSA encryption operations.\n Still, we present the first practical implementation of an anonymous credential system on a Java Card 2.2.1. We achieve transaction times that are orders of magnitudes faster than those of any prior attempt, while raising the bar in terms of key length and trust model. Our system is the first one to act completely autonomously on card and to maintain its properties in the face of an untrusted terminal. In addition, we provide a formal system specification and share our solution strategies and experiences gained and with the Java Card.",
"title": ""
},
{
"docid": "neg:1840129_19",
"text": "The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsense replies. To alleviate the first problem, we first extend a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution. Then, in the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training. We evaluate the performance of the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model can generate more high-quality responses and achieve higher overall performance than the state-of-the-art.",
"title": ""
}
] |
1840130 | Case Studies of Damage to Tall Steel Moment-Frame Buildings in Southern California during Large San Andreas Earthquakes | [
{
"docid": "pos:1840130_0",
"text": "This is the second of two papers describing a procedure for the three dimensional nonlinear time-history analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments: two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building.",
"title": ""
}
] | [
{
"docid": "neg:1840130_0",
"text": "The proliferation of social media in communication and information dissemination has made it an ideal platform for spreading rumors. Automatically debunking rumors at their stage of diffusion is known as early rumor detection, which refers to dealing with sequential posts regarding disputed factual claims with certain variations and highly textual duplication over time. Thus, identifying trending rumors demands an efficient yet flexible model that is able to capture long-range dependencies among postings and produce distinct representations for the accurate early detection. However, it is a challenging task to apply conventional classification algorithms to rumor detection in earliness since they rely on hand-crafted features which require intensive manual efforts in the case of large amount of posts. This paper presents a deep attention model on the basis of recurrent neural networks (RNN) to learn selectively temporal hidden representations of sequential posts for identifying rumors. The proposed model delves soft-attention into the recurrence to simultaneously pool out distinct features with particular focus and produce hidden representations that capture contextual variations of relevant posts over time. Extensive experiments on real datasets collected from social media websites demonstrate that (1) the deep attention based RNN model outperforms state-of-the-arts that rely on hand-crafted features; (2) the introduction of soft attention mechanism can effectively distill relevant parts to rumors from original posts in advance; (3) the proposed method detects rumors more quickly and accurately than competitors.",
"title": ""
},
{
"docid": "neg:1840130_1",
"text": "We show that the Thompson Sampling algorithm achieves logarithmic expected regret for the Bernoulli multi-armed bandit problem. More precisely, for the two-armed bandit problem, the expected regret in time T is O(ln T/∆ + 1/∆³). And, for the N-armed bandit problem, the expected regret in time T is O([(∑_{i=2}^N 1/∆_i²)²] ln T). Our bounds are optimal but for the dependence on ∆_i and the constant factors in big-Oh.",
"title": ""
},
{
"docid": "neg:1840130_2",
"text": "Both intuition and creativity are associated with knowledge creation, yet a clear link between them has not been adequately established. First, the available empirical evidence for an underlying relationship between intuition and creativity is sparse in nature. Further, this evidence is arguable as the concepts are diversely operationalized and the measures adopted are often not validated sufficiently. Combined, these issues make the findings from various studies examining the link between intuition and creativity difficult to replicate. Nevertheless, the role of intuition in creativity should not be neglected as it is often reported to be a core component of the idea generation process, which in conjunction with idea evaluation are crucial phases of creative cognition. We review the prior research findings in respect of idea generation and idea evaluation from the view that intuition can be construed as the gradual accumulation of cues to coherence. Thus, we summarize the literature on what role intuitive processes play in the main stages of the creative problem-solving process and outline a conceptual framework of the interaction between intuition and creativity. Finally, we discuss the main challenges of measuring intuition as well as possible directions for future research.",
"title": ""
},
{
"docid": "neg:1840130_3",
"text": "In this letter, a novel miniaturized periodic element for constructing a bandpass frequency selective surface (FSS) is proposed. Compared to previous miniaturized structures, the proposed FSS has better miniaturization performance, with the dimension of a unit cell only 0.061 λ × 0.061 λ, where λ represents the wavelength of the resonant frequency. Moreover, the miniaturization characteristic is stable with respect to different polarizations and incident angles of the illuminating waves. Both simulation and measurement are taken, and the results obtained demonstrate the claimed performance.",
"title": ""
},
{
"docid": "neg:1840130_4",
"text": "Essay scoring is a complicated process requiring analyzing, summarizing and judging expertise. Traditional work on essay scoring focused on handcrafted features, which are expensive yet sparse. Neural models offer a way to learn syntactic and semantic features automatically, which can potentially improve upon discrete features. In this paper, we employ a convolutional neural network (CNN) to learn features automatically, and compare the result with state-of-the-art discrete baselines. For in-domain and domain-adaptation essay scoring tasks, our neural model empirically outperforms discrete models.",
"title": ""
},
{
"docid": "neg:1840130_5",
"text": "This paper introduces concepts and algorithms of feature selection, surveys existing feature selection algorithms for classification and clustering, groups and compares different algorithms with a categorizing framework based on search strategies, evaluation criteria, and data mining tasks, reveals unattempted combinations, and provides guidelines in selecting feature selection algorithms. With the categorizing framework, we continue our efforts toward building an integrated system for intelligent feature selection. A unifying platform is proposed as an intermediate step. An illustrative example is presented to show how existing feature selection algorithms can be integrated into a meta algorithm that can take advantage of individual algorithms. An added advantage of doing so is to help a user employ a suitable algorithm without knowing details of each algorithm. Some real-world applications are included to demonstrate the use of feature selection in data mining. We conclude this work by identifying trends and challenges of feature selection research and development.",
"title": ""
},
{
"docid": "neg:1840130_6",
"text": "This paper discusses methods for the detection of leukemia. Various image processing techniques are used for identification of red blood cells and immature white cells. Different diseases like anemia, leukemia, malaria, vitamin B12 deficiency, etc. can be diagnosed accordingly. The objective is to detect leukemia-affected cells and count them. According to the detection of immature blast cells, leukemia can be identified and classified as either chronic or acute. To detect immature cells, a number of methods are used, such as histogram equalization, linear contrast stretching, and morphological techniques like area opening, area closing, erosion, and dilation. Watershed transform, K-means, histogram equalization & linear contrast stretching, and shape-based features are 72.2%, 72%, 73.7% and 97.8% accurate, respectively.",
"title": ""
},
{
"docid": "neg:1840130_7",
"text": "Shorter product life cycles and aggressive marketing, among other factors, have increased the complexity of sales forecasting. Forecasts are often produced using a Forecasting Support System that integrates univariate statistical forecasting with managerial judgment. Forecasting sales under promotional activity is one of the main reasons to use expert judgment. Alternatively, one can replace expert adjustments by regression models whose exogenous inputs are promotion features (price, display, etc.). However, these regression models may have large dimensionality as well as multicollinearity issues. We propose a novel promotional model that overcomes these limitations. It combines Principal Component Analysis to reduce the dimensionality of the problem and automatically identifies the demand dynamics. For items with limited history, the proposed model is capable of providing promotional forecasts by selectively pooling information across established products. The performance of the model is compared against forecasts provided by experts and statistical benchmarks on weekly data, outperforming both substantially.",
"title": ""
},
{
"docid": "neg:1840130_8",
"text": "Most Machine Learning (ML) researchers focus on automatic Machine Learning (aML) where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from the availability of “big data”. However, sometimes, for example in health informatics, we are confronted with only a small number of data sets or rare events, and with complex problems where aML-approaches fail or deliver unsatisfactory results. Here, interactive Machine Learning (iML) may be of help and the “human-in-the-loop” approach may be beneficial in solving computationally hard problems, where human expertise can help to reduce an exponential search space through heuristics. In this paper, experiments are discussed which help to evaluate the effectiveness of the iML-“human-in-the-loop” approach, particularly in opening the “black box”, thereby enabling a human to directly and indirectly manipulate and interact with an algorithm. For this purpose, we selected the Ant Colony Optimization (ACO) framework, and use it on the Traveling Salesman Problem (TSP) which is of high importance in solving many practical problems in health informatics, e.g. in the study of proteins.",
"title": ""
},
{
"docid": "neg:1840130_9",
"text": "In this paper, we argue that instead of solely focusing on developing efficient architectures to accelerate well-known low-precision CNNs, we should also seek to modify the network to suit the FPGA. We develop a fully automated toolflow which focuses on modifying the network through filter pruning, such that it efficiently utilizes the FPGA hardware whilst satisfying a predefined accuracy threshold. Although fewer weights are removed in comparison to traditional pruning techniques designed for software implementations, the overall model complexity and feature map storage is greatly reduced. We implement the AlexNet and TinyYolo networks on the large-scale ImageNet and PascalVOC datasets, to demonstrate up to roughly 2× speedup in frames per second and 2× reduction in resource requirements over the original network, with equal or improved accuracy.",
"title": ""
},
{
"docid": "neg:1840130_10",
"text": "Automatic generation of presentation slides for academic papers is a very challenging task. Previous methods for addressing this task are mainly based on document summarization techniques and they extract document sentences to form presentation slides, which are not well-structured and concise. In this study, we propose a phrase-based approach to generate well-structured and concise presentation slides for academic papers. Our approach first extracts phrases from the given paper, and then learns both the saliency of each phrase and the hierarchical relationship between a pair of phrases. Finally a greedy algorithm is used to select and align the salient phrases in order to form the well-structured presentation slides. Evaluation results on a real dataset verify the efficacy of our proposed approach.",
"title": ""
},
{
"docid": "neg:1840130_11",
"text": "In this work we release our extensible and easily configurable neural network training software. It provides a rich set of functional layers with a particular focus on efficient training of recurrent neural network topologies on multiple GPUs. The source of the software package is public and freely available for academic research purposes and can be used as a framework or as a standalone tool which supports a flexible configuration. The software allows to train state-of-the-art deep bidirectional long short-term memory (LSTM) models on both one dimensional data like speech or two dimensional data like handwritten text and was used to develop successful submission systems in several evaluation campaigns.",
"title": ""
},
{
"docid": "neg:1840130_12",
"text": "We performed meta-analyses on 60 neuroimaging (PET and fMRI) studies of working memory (WM), considering three types of storage material (spatial, verbal, and object), three types of executive function (continuous updating of WM, memory for temporal order, and manipulation of information in WM), and interactions between material and executive function. Analyses of material type showed the expected dorsal-ventral dissociation between spatial and nonspatial storage in the posterior cortex, but not in the frontal cortex. Some support was found for left frontal dominance in verbal WM, but only for tasks with low executive demand. Executive demand increased right lateralization in the frontal cortex for spatial WM. Tasks requiring executive processing generally produce more dorsal frontal activations than do storage-only tasks, but not all executive processes show this pattern. Brodmann's areas (BAs) 6, 8, and 9, in the superior frontal cortex, respond most when WM must be continuously updated and when memory for temporal order must be maintained. Right BAs 10 and 47, in the ventral frontal cortex, respond more frequently with demand for manipulation (including dual-task requirements or mental operations). BA 7, in the posterior parietal cortex, is involved in all types of executive function. Finally, we consider a potential fourth executive function: selective attention to features of a stimulus to be stored in WM, which leads to increased probability of activating the medial prefrontal cortex (BA 32) in storage tasks.",
"title": ""
},
{
"docid": "neg:1840130_13",
"text": "Over the last two decades there have been several process models proposed (and used) for data and information fusion. A common theme of these models is the existence of multiple levels of processing within the data fusion process. In the 1980’s three models were adopted: the intelligence cycle, the JDL model and the Boyd control. The 1990’s saw the introduction of the Dasarathy model and the Waterfall model. However, each of these models has particular advantages and disadvantages. A new model for data and information fusion is proposed. This is the Omnibus model, which draws together each of the previous models and their associated advantages whilst managing to overcome some of the disadvantages. Where possible the terminology used within the Omnibus model is aimed at a general user of data fusion technology to allow use by a distributed audience.",
"title": ""
},
{
"docid": "neg:1840130_14",
"text": "Limited work has examined how self-affirmation might lead to positive outcomes beyond the maintenance of a favorable self-image. To address this gap in the literature, we conducted two studies in two cultures to establish the benefits of self-affirmation for psychological well-being. In Study 1, South Korean participants who affirmed their values for 2 weeks showed increased eudaimonic well-being (need satisfaction, meaning, and flow) relative to control participants. In Study 2, U.S. participants performed a self-affirmation activity for 4 weeks. Extending Study 1, after 2 weeks, self-affirmation led both to increased eudaimonic well-being and hedonic well-being (affect balance). By 4 weeks, however, these effects were non-linear, and the increases in affect balance were only present for vulnerable participants-those initially low in eudaimonic well-being. In sum, the benefits of self-affirmation appear to extend beyond self-protection to include two types of well-being.",
"title": ""
},
{
"docid": "neg:1840130_15",
"text": "Switched reluctance machines (SRMs) are considered as serious candidates for starter/alternator (S/A) systems in more electric cars. Robust performance in the presence of high temperature, safe operation, offering high efficiency, and a very long constant power region, along with a rugged structure contribute to their suitability for this high impact application. To enhance these qualities, we have developed key technologies including sensorless operation over the entire speed range and closed-loop torque and speed regulation. The present paper offers an in-depth analysis of the drive dynamics during motoring and generating modes of operation. These findings will be used to explain our control strategies in the context of the S/A application. Experimental and simulation results are also demonstrated to validate the practicality of our claims.",
"title": ""
},
{
"docid": "neg:1840130_16",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
},
{
"docid": "neg:1840130_17",
"text": "With modern smart phones and powerful mobile devices, Mobile apps provide many advantages to the community but it has also grown the demand for online availability and accessibility. Cloud computing is provided to be widely adopted for several applications in mobile devices. However, there are many advantages and disadvantages of using mobile applications and cloud computing. This paper focuses in providing an overview of mobile cloud computing advantages, disadvantages. The paper discusses the importance of mobile cloud applications and highlights the mobile cloud computing open challenges",
"title": ""
},
{
"docid": "neg:1840130_18",
"text": "The rapid increase in multimedia data transmission over the Internet necessitates the multi-modal summarization (MMS) from collections of text, image, audio and video. In this work, we propose an extractive multi-modal summarization method that can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic. The key idea is to bridge the semantic gaps between multi-modal content. For audio information, we design an approach to selectively use its transcription. For visual information, we learn the joint representations of text and images using a neural network. Finally, all of the multimodal aspects are considered to generate the textual summary by maximizing the salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce an MMS corpus in English and Chinese, which is released to the public1. The experimental results obtained on this dataset demonstrate that our method outperforms other competitive baseline methods.",
"title": ""
},
{
"docid": "neg:1840130_19",
"text": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification.",
"title": ""
}
] |
1840131 | Automatic repair of buggy if conditions and missing preconditions with SMT | [
{
"docid": "pos:1840131_0",
"text": "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.",
"title": ""
},
{
"docid": "pos:1840131_1",
"text": "Patch generation is an essential software maintenance task because most software systems inevitably have bugs that need to be fixed. Unfortunately, human resources are often insufficient to fix all reported and known bugs. To address this issue, several automated patch generation techniques have been proposed. In particular, a genetic-programming-based patch generation technique, GenProg, proposed by Weimer et al., has shown promising results. However, these techniques can generate nonsensical patches due to the randomness of their mutation operations. To address this limitation, we propose a novel patch generation approach, Pattern-based Automatic program Repair (PAR), using fix patterns learned from existing human-written patches. We manually inspected more than 60,000 human-written patches and found there are several common fix patterns. Our approach leverages these fix patterns to generate program patches automatically. We experimentally evaluated PAR on 119 real bugs. In addition, a user study involving 89 students and 164 developers confirmed that patches generated by our approach are more acceptable than those generated by GenProg. PAR successfully generated patches for 27 out of 119 bugs, while GenProg was successful for only 16 bugs.",
"title": ""
},
{
"docid": "pos:1840131_2",
"text": "Debugging consumes significant time and effort in any major software development project. Moreover, even after the root cause of a bug is identified, fixing the bug is non-trivial. Given this situation, automated program repair methods are of value. In this paper, we present an automated repair method based on symbolic execution, constraint solving and program synthesis. In our approach, the requirement on the repaired code to pass a given set of tests is formulated as a constraint. Such a constraint is then solved by iterating over a layered space of repair expressions, layered by the complexity of the repair code. We compare our method with recently proposed genetic programming based repair on SIR programs with seeded bugs, as well as fragments of GNU Coreutils with real bugs. On these subjects, our approach reports a higher success-rate than genetic programming based repair, and produces a repair faster.",
"title": ""
}
] | [
{
"docid": "neg:1840131_0",
"text": "The fast simulation of large networks of spiking neurons is a major task for the examination of biology-inspired vision systems. Networks of this type label features by synchronization of spikes and there is strong demand to simulate these e,ects in real world environments. As the calculations for one model neuron are complex, the digital simulation of large networks is not e>cient using existing simulation systems. Consequently, it is necessary to develop special simulation techniques. This article introduces a wide range of concepts for the di,erent parts of digital simulator systems for large vision networks and presents accelerators based on these foundations. c © 2002 Elsevier Science B.V. All rights",
"title": ""
},
{
"docid": "neg:1840131_1",
"text": "A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images with brain tumor into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively.",
"title": ""
},
{
"docid": "neg:1840131_2",
"text": "Communication technologies are becoming increasingly diverse in form and functionality, making it important to identify which aspects of these technologies actually improve geographically distributed communication. Our study examines two potentially important aspects of communication technologies which appear in robot-mediated communication - physical embodiment and control of this embodiment. We studied the impact of physical embodiment and control upon interpersonal trust in a controlled laboratory experiment using three different videoconferencing settings: (1) a handheld tablet controlled by a local user, (2) an embodied system controlled by a local user, and (3) an embodied system controlled by a remote user (n = 29 dyads). We found that physical embodiment and control by the local user increased the amount of trust built between partners. These results suggest that both physical embodiment and control of the system influence interpersonal trust in mediated communication and have implications for future system designs.",
"title": ""
},
{
"docid": "neg:1840131_3",
"text": "In this paper we investigate deceptive defense strategies for web servers. Web servers are widely exploited resources in the modern cyber threat landscape. Often these servers are exposed in the Internet and accessible for a broad range of valid as well as malicious users. Common security strategies like firewalls are not sufficient to protect web servers. Deception based Information Security enables a large set of counter measures to decrease the efficiency of intrusions. In this work we depict several techniques out of the reconnaissance process of an attacker. We match these with deceptive counter measures. All proposed measures are implemented in an experimental web server with deceptive counter measure abilities. We also conducted an experiment with honeytokens and evaluated delay strategies against automated scanner tools.",
"title": ""
},
{
"docid": "neg:1840131_4",
"text": "Pathology reports are a primary source of information for cancer registries which process high volumes of free-text reports annually. Information extraction and coding is a manual, labor-intensive process. In this study, we investigated deep learning and a convolutional neural network (CNN), for extracting ICD-O-3 topographic codes from a corpus of breast and lung cancer pathology reports. We performed two experiments, using a CNN and a more conventional term frequency vector approach, to assess the effects of class prevalence and inter-class transfer learning. The experiments were based on a set of 942 pathology reports with human expert annotations as the gold standard. CNN performance was compared against a more conventional term frequency vector space approach. We observed that the deep learning models consistently outperformed the conventional approaches in the class prevalence experiment, resulting in micro- and macro-F score increases of up to 0.132 and 0.226, respectively, when class labels were well populated. Specifically, the best performing CNN achieved a micro-F score of 0.722 over 12 ICD-O-3 topography codes. Transfer learning provided a consistent but modest performance boost for the deep learning methods but trends were contingent on the CNN method and cancer site. These encouraging results demonstrate the potential of deep learning for automated abstraction of pathology reports.",
"title": ""
},
{
"docid": "neg:1840131_5",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
},
{
"docid": "neg:1840131_6",
"text": "Nonnegative matrix factorization (NMF) has become a widely used tool for the analysis of high-dimensional data as it automatically extracts sparse and meaningful features from a set of nonnegative data vectors. We first illustrate this property of NMF on three applications, in image processing, text mining and hyperspectral imaging –this is the why. Then we address the problem of solving NMF, which is NP-hard in general. We review some standard NMF algorithms, and also present a recent subclass of NMF problems, referred to as near-separable NMF, that can be solved efficiently (that is, in polynomial time), even in the presence of noise –this is the how. Finally, we briefly describe some problems in mathematics and computer science closely related to NMF via the nonnegative rank.",
"title": ""
},
{
"docid": "neg:1840131_7",
"text": "This paper deals with identifying the genre of a movie by analyzing just the visual features of its trailer. This task seems to be very trivial for a human; our endeavor is to create a vision system that can do the same, accurately. We discuss the approaches we take and our experimental observations. The contributions of this work are : (1) we propose a neural network (based on VGG) that can classify movie trailers based on their genres; (2) we release a curated dataset, called YouTube-Trailer Dataset, which has over 800 movie trailers spanning over 4 genres. We achieve an accuracy of 80.1% with the spatial features, and 85% with using LSTM and set these results as the benchmark for this dataset. We have made the source code publicly available.1",
"title": ""
},
{
"docid": "neg:1840131_8",
"text": "Mallomonas eoa TAKAHASHII was first described by TAKAHASHII, who found the alga in ditches at Tsuruoka Parc, North-East Japan (TAKAHASHII 1960, 1963, ASMUND & TAKAHASHII 1969) . He studied the alga by transmission electron microscopy and described its different kinds of scales . However, he did not report the presence of any cysts . In the spring of 1971 a massive development of Mallomonas occurred under the ice in Lake Trummen, central South Sweden . Scanning electron microscopy revealed that the predominant species consisted of Mallomonas eoa TAKAHASHII, which occurred together with Synura petersenii KoRSHIKOV . In April the cells of Mallomonas eoa developed cysts and were studied by light microscopy and scanning electron microscopy. In contrast with earlier techniques the scanning electron microscopy made it possible to study the structure of the scales in various parts of the cell and to relate the cysts to the cells . Such knowledge is of importance also for paleolimnological research . Data on the quantitative and qualitative findings are reported below .",
"title": ""
},
{
"docid": "neg:1840131_9",
"text": "Voice control has emerged as a popular method for interacting with smart-devices such as smartphones, smartwatches etc. Popular voice control applications like Siri and Google Now are already used by a large number of smartphone and tablet users. A major challenge in designing a voice control application is that it requires continuous monitoring of user?s voice input through the microphone. Such applications utilize hotwords such as \"Okay Google\" or \"Hi Galaxy\" allowing them to distinguish user?s voice command and her other conversations. A voice control application has to continuously listen for hotwords which significantly increases the energy consumption of the smart-devices.\n To address this energy efficiency problem of voice control, we present AccelWord in this paper. AccelWord is based on the empirical evidence that accelerometer sensors found in today?s mobile devices are sensitive to user?s voice. We also demonstrate that the effect of user?s voice on accelerometer data is rich enough so that it can be used to detect the hotwords spoken by the user. To achieve the goal of low energy cost but high detection accuracy, we combat multiple challenges, e.g. how to extract unique signatures of user?s speaking hotwords only from accelerometer data and how to reduce the interference caused by user?s mobility.\n We finally implement AccelWord as a standalone application running on Android devices. Comprehensive tests show AccelWord has hotword detection accuracy of 85% in static scenarios and 80% in mobile scenarios. Compared to the microphone based hotword detection applications such as Google Now and Samsung S Voice, AccelWord is 2 times more energy efficient while achieving the accuracy of 98% and 92% in static and mobile scenarios respectively.",
"title": ""
},
{
"docid": "neg:1840131_10",
"text": "Colloidal particles play an important role in various areas of material and pharmaceutical sciences, biotechnology, and biomedicine. In this overview we describe micro- and nano-particles used for the preparation of polyelectrolyte multilayer capsules and as drug delivery vehicles. An essential feature of polyelectrolyte multilayer capsule preparations is the ability to adsorb polymeric layers onto colloidal particles or templates followed by dissolution of these templates. The choice of the template is determined by various physico-chemical conditions: solvent needed for dissolution, porosity, aggregation tendency, as well as release of materials from capsules. Historically, the first templates were based on melamine formaldehyde, later evolving towards more elaborate materials such as silica and calcium carbonate. Their advantages and disadvantages are discussed here in comparison to non-particulate templates such as red blood cells. Further steps in this area include development of anisotropic particles, which themselves can serve as delivery carriers. We provide insights into application of particles as drug delivery carriers in comparison to microcapsules templated on them.",
"title": ""
},
{
"docid": "neg:1840131_11",
"text": "Determining one's own position by means of a smartphone is an important issue for various applications in the fields of personal navigation or location-based services. Places like large airports, shopping malls or extensive underground parking lots require personal navigation but satellite signals and GPS connection cannot be obtained. Thus, alternative or complementary systems are needed. In this paper a system concept to integrate a foot-mounted inertial measurement unit (IMU) with an Android smartphone is presented. We developed a prototype to demonstrate and evaluate the implementation of pedestrian strapdown navigation on a smartphone. In addition to many other approaches we also fuse height measurements from a barometric sensor in order to stabilize height estimation over time. A very low-cost single-chip IMU is used to demonstrate applicability of the outlined system concept for potential commercial applications. In an experimental study we compare the achievable accuracy with a commercially available IMU. The evaluation shows very competitive results on the order of a few percent of traveled distance. Comparing performance, cost and size of the presented IMU the outlined approach carries an enormous potential in the field of indoor pedestrian navigation.",
"title": ""
},
{
"docid": "neg:1840131_12",
"text": "NASA Glenn Research Center, in collaboration with the aerospace industry and academia, has begun the development of technology for a future hybrid-wing body electric airplane with a turboelectric distributed propulsion (TeDP) system. It is essential to design a subscale system to emulate the TeDP power grid, which would enable rapid analysis and demonstration of the proof-of-concept of the TeDP electrical system. This paper describes how small electrical machines with their controllers can emulate all the components in a TeDP power train. The whole system model in Matlab/Simulink was first developed and tested in simulation, and the simulation results showed that system dynamic characteristics could be implemented by using the closed-loop control of the electric motor drive systems. Then we designed a subscale experimental system to emulate the entire power system from the turbine engine to the propulsive fans. Firstly, we built a system to emulate a gas turbine engine driving a generator, consisting of two permanent magnet (PM) motors with brushless motor drives, coupled by a shaft. We programmed the first motor and its drive to mimic the speed-torque characteristic of the gas turbine engine, while the second motor and drive act as a generator and produce a torque load on the first motor. Secondly, we built another system of two PM motors and drives to emulate a motor driving a propulsive fan. We programmed the first motor and drive to emulate a wound-rotor synchronous motor. The propulsive fan was emulated by implementing fan maps and flight conditions into the fourth motor and drive, which produce a torque load on the driving motor. The stator of each PM motor is designed to travel axially to change the coupling between rotor and stator. This feature allows the PM motor to more closely emulate a wound-rotor synchronous machine. 
These techniques can convert the plain motor system into a unique TeDP power grid emulator that enables real-time simulation performance using hardware-in-the-loop (HIL).",
"title": ""
},
{
"docid": "neg:1840131_13",
"text": "Financial engineering such as trading decision is an emerging research area and also has great commercial potentials. A successful stock buying/selling generally occurs near price trend turning point. Traditional technical analysis relies on some statistics (i.e. technical indicators) to predict turning point of the trend. However, these indicators can not guarantee the accuracy of prediction in chaotic domain. In this paper, we propose an intelligent financial trading system through a new approach: learn trading strategy by probabilistic model from high-level representation of time series – turning points and technical indicators. The main contributions of this paper are two-fold. First, we utilize high-level representation (turning point and technical indicators). High-level representation has several advantages such as insensitive to noise and intuitive to human being. However, it is rarely used in past research. Technical indicator is the knowledge from professional investors, which can generally characterize the market. Second, by combining high-level representation with probabilistic model, the randomness and uncertainty of chaotic system is further reduced. In this way, we achieve great results (comprehensive experiments on S&P500 components) in a chaotic domain in which the prediction is thought impossible in the past. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840131_14",
"text": "Imaging radars incorporating digital beamforming (DBF) typically require a uniform linear antenna array (ULA). However, using a large number of parallel receivers increases system complexity and cost. A switched antenna array can provide a similar performance at a lower expense. This paper describes an active switched antenna array with 32 integrated planar patch antennas illuminating a cylindrical lens. The array can be operated over a frequency range from 73 GHz–81 GHz. Together with a broadband FMCW frontend (Frequency Modulated Continuous Wave) a DBF radar was implemented. The design of the array is presented together with measurement results.",
"title": ""
},
{
"docid": "neg:1840131_15",
"text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.2.",
"title": ""
},
{
"docid": "neg:1840131_16",
"text": "Self -- Management systems are the main objective of Autonomic Computing (AC), and it is needed to increase the running system's reliability, stability, and performance. This field needs to investigate some issues related to complex systems such as, self-awareness system, when and where an error state occurs, knowledge for system stabilization, analyze the problem, healing plan with different solutions for adaptation without the need for human intervention. This paper focuses on self-healing which is the most important component of Autonomic Computing. Self-healing is a technique that aims to detect, analyze, and repair existing faults within the system. All of these phases are accomplished in real-time system. In this approach, the system is capable of performing a reconfiguration action in order to recover from a permanent fault. Moreover, self-healing system should have the ability to modify its own behavior in response to changes within the environment. Recursive neural network has been proposed and used to solve the main challenges of self-healing, such as monitoring, interpretation, resolution, and adaptation.",
"title": ""
},
{
"docid": "neg:1840131_17",
"text": "Computer models are widely used to simulate real processes. Within the computer model, there always exist some parameters which are unobservable in the real process but need to be specified in the computer model. The procedure to adjust these unknown parameters in order to fit the model to observed data and improve its predictive capability is known as calibration. In traditional calibration, once the optimal calibration parameter set is obtained, it is treated as known for future prediction. Calibration parameter uncertainty introduced from estimation is not accounted for. We will present a Bayesian calibration approach for stochastic computer models. We account for these additional uncertainties and derive the predictive distribution for the real process. Two numerical examples are used to illustrate the accuracy of the proposed method.",
"title": ""
},
{
"docid": "neg:1840131_18",
"text": "Pathfinding for a single agent is the problem of planning a route from an initial location to a goal location in an environment, going around obstacles. Pathfinding for multiple agents also aims to plan such routes for each agent, subject to different constraints, such as restrictions on the length of each path or on the total length of paths, no self-intersecting paths, no intersection of paths/plans, no crossing/meeting each other. It also has variations for finding optimal solutions, e.g., with respect to the maximum path length, or the sum of plan lengths. These problems are important for many real-life applications, such as motion planning, vehicle routing, environmental monitoring, patrolling, computer games. Motivated by such applications, we introduce a formal framework that is general enough to address all these problems: we use the expressive high-level representation formalism and efficient solvers of the declarative programming paradigm Answer Set Programming. We also introduce heuristics to improve the computational efficiency and/or solution quality. We show the applicability and usefulness of our framework by experiments, with randomly generated problem instances on a grid, on a real-world road network, and on a real computer game terrain.",
"title": ""
}
] |
1840132 | 6-Year follow-up of ventral monosegmental spondylodesis of incomplete burst fractures of the thoracolumbar spine using three cortical iliac crest bone grafts | [
{
"docid": "pos:1840132_0",
"text": "In view of the current level of knowledge and the numerous treatment possibilities, none of the existing classification systems of thoracic and lumbar injuries is completely satisfactory. As a result of more than a decade of consideration of the subject matter and a review of 1445 consecutive thoracolumbar injuries, a comprehensive classification of thoracic and lumbar injuries is proposed. The classification is primarily based on pathomorphological criteria. Categories are established according to the main mechanism of injury, pathomorphological uniformity, and in consideration of prognostic aspects regarding healing potential. The classification reflects a progressive scale of morphological damage by which the degree of instability is determined. The severity of the injury in terms of instability is expressed by its ranking within the classification system. A simple grid, the 3-3-3 scheme of the AO fracture classification, was used in grouping the injuries. This grid consists of three types: A, B, and C. Every type has three groups, each of which contains three subgroups with specifications. The types have a fundamental injury pattern which is determined by the three most important mechanisms acting on the spine: compression, distraction, and axial torque. Type A (vertebral body compression) focuses on injury patterns of the vertebral body. Type B injuries (anterior and posterior element injuries with distraction) are characterized by transverse disruption either anteriorly or posteriorly. Type C lesions (anterior and posterior element injuries with rotation) describe injury patterns resulting from axial torque. The latter are most often superimposed on either type A or type B lesions. Morphological criteria are predominantly used for further subdivision of the injuries. Severity progresses from type A through type C as well as within the types, groups, and further subdivisions. 
The 1445 cases were analyzed with regard to the level of the main injury, the frequency of types and groups, and the incidence of neurological deficit. Most injuries occurred around the thoracolumbar junction. The upper and lower end of the thoracolumbar spine and the T 10 level were most infrequently injured. Type A fractures were found in 66.1 %, type B in 14.5%, and type C in 19.4% of the cases. Stable type Al fractures accounted for 34.7% of the total. Some injury patterns are typical for certain sections of the thoracolumbar spine and others for age groups. The neurological deficit, ranging from complete paraplegia to a single root lesion, was evaluated in 1212 cases. The overall incidence was 22% and increased significantly from type to type: neurological deficit was present in 14% of type A, 32% of type B, and 55% of type C lesions. Only 2% of the Al and 4% of the A2 fractures showed any neurological deficit. The classification is comprehensive as almost any injury can be itemized according to easily recognizable and consistent radiographic and clinical findings. Every injury can be defined alphanumerically or by a descriptive name. The classification can, however, also be used in an abbreviated form without impairment of the information most important for clinical practice. Identification of the fundamental nature of an injury is facilitated by a simple algorithm. Recognizing the nature of the injury, its degree of instability, and prognostic aspects are decisive for the choice of the most appropriate treatment. Experience has shown that the new classification is especially useful in this respect.",
"title": ""
}
] | [
{
"docid": "neg:1840132_0",
"text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.",
"title": ""
},
{
"docid": "neg:1840132_1",
"text": "Meeting future goals for aircraft and air traffic system performance will require new airframes with more highly integrated propulsion. Previous studies have evaluated hybrid wing body (HWB) configurations with various numbers of engines and with increasing degrees of propulsion-airframe integration. A recently published configuration with 12 small engines partially embedded in a HWB aircraft, reviewed herein, serves as the airframe baseline for the new concept aircraft that is the subject of this paper. To achieve high cruise efficiency, a high lift-to-drag ratio HWB was adopted as the baseline airframe along with boundary layer ingestion inlets and distributed thrust nozzles to fill in the wakes generated by the vehicle. The distributed powered-lift propulsion concept for the baseline vehicle used a simple, high-lift-capable internally blown flap or jet flap system with a number of small high bypass ratio turbofan engines in the airframe. In that concept, the engine flow path from the inlet to the nozzle is direct and does not involve complicated internal ducts through the airframe to redistribute the engine flow. In addition, partially embedded engines, distributed along the upper surface of the HWB airframe, provide noise reduction through airframe shielding and promote jet flow mixing with the ambient airflow. To improve performance and to reduce noise and environmental impact even further, a drastic change in the propulsion system is proposed in this paper. The new concept adopts the previous baseline cruise-efficient short take-off and landing (CESTOL) airframe but employs a number of superconducting motors to drive the distributed fans rather than using many small conventional engines. The power to drive these electric fans is generated by two remotely located gas-turbine-driven superconducting generators. 
This arrangement allows many small partially embedded fans while retaining the superior efficiency of large core engines, which are physically separated but connected through electric power lines to the fans. This paper presents a brief description of the earlier CESTOL vehicle concept and the newly proposed electrically driven fan concept vehicle, using the previous CESTOL vehicle as a baseline.",
"title": ""
},
{
"docid": "neg:1840132_2",
"text": "Tuning a pre-trained network is commonly thought to improve data efficiency. However, Kaiming He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, should the model train long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. Through extensive experiments on label corruption, class imbalance, adversarial examples, out-of-distribution detection, and confidence calibration, we demonstrate large gains from pre-training and complementary effects with task-specific methods. We show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR10 and CIFAR-100. In some cases, using pretraining without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.",
"title": ""
},
{
"docid": "neg:1840132_3",
"text": "BACKGROUND\nRecently, human germinal center-associated lymphoma (HGAL) gene protein has been proposed as an adjunctive follicular marker to CD10 and BCL6.\n\n\nMETHODS\nOur aim was to evaluate immunoreactivity for HGAL in 82 cases of follicular lymphomas (FLs)--67 nodal, 5 cutaneous and 10 transformed--which were all analysed histologically, by immunohistochemistry and PCR.\n\n\nRESULTS\nImmunostaining for HGAL was more frequently positive (97.6%) than that for BCL6 (92.7%) and CD10 (90.2%) in FLs; the cases negative for bcl6 and/or for CD10 were all positive for HGAL, whereas the two cases negative for HGAL were positive with BCL6; no difference in HGAL immunostaining was found among different malignant subtypes or grades.\n\n\nCONCLUSIONS\nTherefore, HGAL can be used in the immunostaining of FLs as the most sensitive germinal center (GC)-marker; when applied alone, it would half the immunostaining costs, reserving the use of the other two markers only to HGAL-negative cases.",
"title": ""
},
{
"docid": "neg:1840132_4",
"text": "We demonstrate an end-to-end question answering system that integrates BERT with the open-source Anserini information retrieval toolkit. In contrast to most question answering and reading comprehension models today, which operate over small amounts of input text, our system integrates best practices from IR with a BERT-based reader to identify answers from a large corpus of Wikipedia articles in an end-to-end fashion. We report large improvements over previous results on a standard benchmark test collection, showing that fine-tuning pretrained BERT with SQuAD is sufficient to achieve high accuracy in identifying answer spans.",
"title": ""
},
{
"docid": "neg:1840132_5",
"text": "Previous work has used monolingual parallel corpora to extract and generate paraphrases. We show that this task can be done using bilingual parallel corpora, a much more commonly available resource. Using alignment techniques from phrasebased statistical machine translation, we show how paraphrases in one language can be identified using a phrase in another language as a pivot. We define a paraphrase probability that allows paraphrases extracted from a bilingual parallel corpus to be ranked using translation probabilities, and show how it can be refined to take contextual information into account. We evaluate our paraphrase extraction and ranking methods using a set of manual word alignments, and contrast the quality with paraphrases extracted from automatic alignments.",
"title": ""
},
{
"docid": "neg:1840132_6",
"text": "Many-objective (four or more objectives) optimization problems pose a great challenge to the classical Pareto-dominance based multi-objective evolutionary algorithms (MOEAs), such as NSGA-II and SPEA2. This is mainly due to the fact that the selection pressure based on Pareto-dominance degrades severely with the number of objectives increasing. Very recently, a reference-point based NSGA-II, referred as NSGA-III, is suggested to deal with many-objective problems, where the maintenance of diversity among population members is aided by supplying and adaptively updating a number of well-spread reference points. However, NSGA-III still relies on Pareto-dominance to push the population towards Pareto front (PF), leaving room for the improvement of its convergence ability. In this paper, an improved NSGA-III procedure, called θ-NSGA-III, is proposed, aiming to better tradeoff the convergence and diversity in many-objective optimization. In θ-NSGA-III, the non-dominated sorting scheme based on the proposed θ-dominance is employed to rank solutions in the environmental selection phase, which ensures both convergence and diversity. Computational experiments have shown that θ-NSGA-III is significantly better than the original NSGA-III and MOEA/D on most instances no matter in convergence and overall performance.",
"title": ""
},
{
"docid": "neg:1840132_7",
"text": "Structures of biological macromolecules determined by transmission cryoelectron microscopy (cryo-TEM) and three-dimensional image reconstruction are often displayed as surface-shaded representations with depth cueing along the viewed direction (Z cueing). Depth cueing to indicate distance from the center of virus particles (radial-depth cueing, or R cueing) has also been used. We have found that a style of R cueing in which color is applied in smooth or discontinuous gradients using the IRIS Explorer software is an informative technique for displaying the structures of virus particles solved by cryo-TEM and image reconstruction. To develop and test these methods, we used existing cryo-TEM reconstructions of mammalian reovirus particles. The newly applied visualization techniques allowed us to discern several new structural features, including sites in the inner capsid through which the viral mRNAs may be extruded after they are synthesized by the reovirus transcriptase complexes. To demonstrate the broad utility of the methods, we also applied them to cryo-TEM reconstructions of human rhinovirus, native and swollen forms of cowpea chlorotic mottle virus, truncated core of pyruvate dehydrogenase complex from Saccharomyces cerevisiae, and flagellar filament of Salmonella typhimurium. We conclude that R cueing with color gradients is a useful tool for displaying virus particles and other macromolecules analyzed by cryo-TEM and image reconstruction.",
"title": ""
},
{
"docid": "neg:1840132_8",
"text": "Recognition of the mode of motion or mode of transit of the user or platform carrying a device is needed in portable navigation, as well as other technological domains. An extensive survey on motion mode recognition approaches is provided in this survey paper. The survey compares and describes motion mode recognition approaches from different viewpoints: usability and convenience, types of devices in terms of setup mounting and data acquisition, various types of sensors used, signal processing methods employed, features extracted, and classification techniques. This paper ends with a quantitative comparison of the performance of motion mode recognition modules developed by researchers in different domains.",
"title": ""
},
{
"docid": "neg:1840132_9",
"text": "Data mining system contain large amount of private and sensitive data such as healthcare, financial and criminal records. These private and sensitive data can not be share to every one, so privacy protection of data is required in data mining system for avoiding privacy leakage of data. Data perturbation is one of the best methods for privacy preserving. We used data perturbation method for preserving privacy as well as accuracy. In this method individual data value are distorted before data mining application. In this paper we present min max normalization transformation based data perturbation. The privacy parameters are used for measurement of privacy protection and the utility measure shows the performance of data mining technique after data distortion. We performed experiment on real life dataset and the result show that min max normalization transformation based data perturbation method is effective to protect confidential information and also maintain the performance of data mining technique after data distortion.",
"title": ""
},
{
"docid": "neg:1840132_10",
"text": "We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient direction (Liu & Wang, 2016) that maximally decreases the KL divergence with the target distribution. Our method works for any target distribution specified by their unnormalized density function, and can train any black-box architectures that are differentiable in terms of the parameters we want to adapt. We demonstrate our method with a number of applications, including variational autoencoder (VAE) with expressive encoders to model complex latent space structures, and hyper-parameter learning of MCMC samplers that allows Bayesian inference to adaptively improve itself when seeing more data.",
"title": ""
},
{
"docid": "neg:1840132_11",
"text": "In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels. We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems. Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure. Both single-user and multiuser transmission are examined. Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.",
"title": ""
},
{
"docid": "neg:1840132_12",
"text": "Sweeping has become the workhorse algorithm for cre ating conforming hexahedral meshes of complex model s. This paper describes progress on the automatic, robust generat ion of MultiSwept meshes in CUBIT. MultiSweeping ex t nds the class of volumes that may be swept to include those with mul tiple source and multiple target surfaces. While no t yet perfect, CUBIT’s MultiSweeping has recently become more reliable, an d been extended to assemblies of volumes. Sweep For ging automates the process of making a volume (multi) sweepable: Sweep V rification takes the given source and target sur faces, and automatically classifies curve and vertex types so that sweep lay ers are well formed and progress from sources to ta rge s.",
"title": ""
},
{
"docid": "neg:1840132_13",
"text": "The emergence of GUI is a great progress in the history of computer science and software design. GUI makes human computer interaction more simple and interesting. Python, as a popular programming language in recent years, has not been realized in GUI design. Tkinter has the advantage of native support for Python, but there are too few visual GUI generators supporting Tkinter. This article presents a GUI generator based on Tkinter framework, PyDraw. The design principle of PyDraw and the powerful design concept behind it are introduced in detail. With PyDraw's GUI design philosophy, it can easily design a visual GUI rendering generator for any GUI framework with canvas functionality or programming language with screen display control. This article is committed to conveying PyDraw's GUI free design concept. Through experiments, we have proved the practicability and efficiency of PyDrawd. In order to better convey the design concept of PyDraw, let more enthusiasts join PyDraw update and evolution, we have the source code of PyDraw. At the end of the article, we summarize our experience and express our vision for future GUI design. We believe that the future GUI will play an important role in graphical software programming, the future of less code or even no code programming software design methods must become a focus and hot, free, like drawing GUI will be worth pursuing.",
"title": ""
},
{
"docid": "neg:1840132_14",
"text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.",
"title": ""
},
{
"docid": "neg:1840132_15",
"text": "India contributes about 70% of malaria in the South East Asian Region of WHO. Although annually India reports about two million cases and 1000 deaths attributable to malaria, there is an increasing trend in the proportion of Plasmodium falciparum as the agent. There exists heterogeneity and variability in the risk of malaria transmission between and within the states of the country as many ecotypes/paradigms of malaria have been recognized. The pattern of clinical presentation of severe malaria has also changed and while multi-organ failure is more frequently observed in falciparum malaria, there are reports of vivax malaria presenting with severe manifestations. The high burden populations are ethnic tribes living in the forested pockets of the states like Orissa, Jharkhand, Madhya Pradesh, Chhattisgarh and the North Eastern states which contribute bulk of morbidity and mortality due to malaria in the country. Drug resistance, insecticide resistance, lack of knowledge of actual disease burden along with new paradigms of malaria pose a challenge for malaria control in the country. Considering the existing gaps in reported and estimated morbidity and mortality, need for estimation of true burden of malaria has been stressed. Administrative, financial, technical and operational challenges faced by the national programme have been elucidated. Approaches and priorities that may be helpful in tackling serious issues confronting malaria programme have been outlined.",
"title": ""
},
{
"docid": "neg:1840132_16",
"text": "One of the key challenges in applying reinforcement learning to complex robotic control tasks is the need to gather large amounts of experience in order to find an effective policy for the task at hand. Model-based reinforcement learning can achieve good sample efficiency, but requires the ability to learn a model of the dynamics that is good enough to learn an effective policy. In this work, we develop a model-based reinforcement learning algorithm that combines prior knowledge from previous tasks with online adaptation of the dynamics model. These two ingredients enable highly sample-efficient learning even in regimes where estimating the true dynamics is very difficult, since the online model adaptation allows the method to locally compensate for unmodeled variation in the dynamics. We encode the prior experience into a neural network dynamics model, adapt it online by progressively refitting a local linear model of the dynamics, and use model predictive control to plan under these dynamics. Our experimental results show that this approach can be used to solve a variety of complex robotic manipulation tasks in just a single attempt, using prior data from other manipulation behaviors.",
"title": ""
},
{
"docid": "neg:1840132_17",
"text": "A variety of applications employ ensemble learning models, using a collection of decision trees, to quickly and accurately classify an input based on its vector of features. In this paper, we discuss the implementation of such a method, namely Random Forests, as the first machine learning algorithm to be executed on the Automata Processor (AP). The AP is an upcoming reconfigurable co-processor accelerator which supports the execution of numerous automata in parallel against a single input data-flow. Owing to this execution model, our approach is fundamentally di↵erent, translating Random Forest models from existing memory-bound tree-traversal algorithms to pipelined designs that use multiple automata to check all of the required thresholds independently and in parallel. We also describe techniques to handle floatingpoint feature values which are not supported in the native hardware, pipelining of the execution stages, and compression of automata for the fastest execution times. The net result is a solution which when evaluated using two applications, namely handwritten digit recognition and sentiment analysis, produce up to 63 and 93 times speed-up respectively over single-core state-of-the-art CPU-based solutions. We foresee these algorithmic techniques to be useful not only in the acceleration of other applications employing Random Forests, but also in the implementation of other machine learning methods on this novel architecture.",
"title": ""
},
{
"docid": "neg:1840132_18",
"text": "\"Big Data\" as a term has been among the biggest trends of the last three years, leading to an upsurge of research, as well as industry and government applications. Data is deemed a powerful raw material that can impact multidisciplinary research endeavors as well as government and business performance. The goal of this discussion paper is to share the data analytics opinions and perspectives of the authors relating to the new opportunities and challenges brought forth by the big data movement. The authors bring together diverse perspectives, coming from different geographical locations with different core research expertise and different affiliations and work experiences. The aim of this paper is to evoke discussion rather than to provide a comprehensive survey of big data research.",
"title": ""
},
{
"docid": "neg:1840132_19",
"text": "The significant growth of the Internet of Things (IoT) is revolutionizing the way people live by transforming everyday Internet-enabled objects into an interconnected ecosystem of digital and personal information accessible anytime and anywhere. As more objects become Internet-enabled, the security and privacy of the personal information generated, processed and stored by IoT devices become complex and challenging to manage. This paper details the current security and privacy challenges presented by the increasing use of the IoT. Furthermore, investigate and analyze the limitations of the existing solutions with regard to addressing security and privacy challenges in IoT and propose a possible solution to address these challenges. The results of this proposed solution could be implemented during the IoT design, building, testing and deployment phases in the real-life environments to minimize the security and privacy challenges associated with IoT.",
"title": ""
}
] |
1840133 | Nested Mini-Batch K-Means | [
{
"docid": "pos:1840133_0",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "pos:1840133_1",
"text": "Sparse coding---that is, modelling data vectors as sparse linear combinations of basis elements---is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.",
"title": ""
}
] | [
{
"docid": "neg:1840133_0",
"text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.",
"title": ""
},
{
"docid": "neg:1840133_1",
"text": "A relatively simple model of the phonological loop (A. D. Baddeley, 1986), a component of working memory, has proved capable of accommodating a great deal of experimental evidence from normal adult participants, children, and neuropsychological patients. Until recently, however, the role of this subsystem in everyday cognitive activities was unclear. In this article the authors review studies of word learning by normal adults and children, neuropsychological patients, and special developmental populations, which provide evidence that the phonological loop plays a crucial role in learning the novel phonological forms of new words. The authors propose that the primary purpose for which the phonological loop evolved is to store unfamiliar sound patterns while more permanent memory records are being constructed. Its use in retaining sequences of familiar words is, it is argued, secondary.",
"title": ""
},
{
"docid": "neg:1840133_2",
"text": "Social media use continues to grow and is especially prevalent among young adults. It is surprising then that, in spite of this enhanced interconnectivity, young adults may be lonelier than other age groups, and that the current generation may be the loneliest ever. We propose that only image-based platforms (e.g., Instagram, Snapchat) have the potential to ameliorate loneliness due to the enhanced intimacy they offer. In contrast, text-based platforms (e.g., Twitter, Yik Yak) offer little intimacy and should have no effect on loneliness. This study (N 1⁄4 253) uses a mixed-design survey to test this possibility. Quantitative results suggest that loneliness may decrease, while happiness and satisfaction with life may increase, as a function of image-based social media use. In contrast, text-based media use appears ineffectual. Qualitative results suggest that the observed effects may be due to the enhanced intimacy offered by imagebased (versus text-based) social media use. © 2016 Published by Elsevier Ltd. “The more advanced the technology, on the whole, the more possible it is for a considerable number of human beings to imagine being somebody else.” -sociologist David Riesman.",
"title": ""
},
{
"docid": "neg:1840133_3",
"text": "Object class detection has been a synonym for 2D bounding box localization for the longest time, fueled by the success of powerful statistical learning techniques, combined with robust image representations. Only recently, there has been a growing interest in revisiting the promise of computer vision from the early days: to precisely delineate the contents of a visual scene, object by object, in 3D. In this paper, we draw from recent advances in object detection and 2D-3D object lifting in order to design an object class detector that is particularly tailored towards 3D object class detection. Our 3D object class detection method consists of several stages gradually enriching the object detection output with object viewpoint, keypoints and 3D shape estimates. Following careful design, in each stage it constantly improves the performance and achieves state-of-the-art performance in simultaneous 2D bounding box and viewpoint estimation on the challenging Pascal3D+ [50] dataset.",
"title": ""
},
{
"docid": "neg:1840133_4",
"text": "Smothering is defined as an obstruction of the air passages above the level of the epiglottis, including the nose, mouth, and pharynx. This is in contrast to choking, which is considered to be due to an obstruction of the air passages below the epiglottis. The manner of death in smothering can be homicidal, suicidal, or an accident. Accidental smothering is considered to be a rare event among middle-aged adults, yet many cases still occur. Presented here is the case of a 39-year-old woman with a history of bipolar disease who was found dead on her living room floor by her neighbors. Her hands were covered in scratches and her pet cat was found disemboweled in the kitchen with its tail hacked off. On autopsy her stomach was found to be full of cat intestines, adipose tissue, and strips of fur-covered skin. An intact left kidney and adipose tissue were found lodged in her throat just above her epiglottis. After a complete investigation, the cause of death was determined to be asphyxia by smothering due to animal tissue.",
"title": ""
},
{
"docid": "neg:1840133_5",
"text": "Tasks in visual analytics differ from typical information retrieval tasks in fundamental ways. A critical part of a visual analytics is to ask the right questions when dealing with a diverse collection of information. In this article, we introduce the design and application of an integrated exploratory visualization system called Storylines. Storylines provides a framework to enable analysts visually and systematically explore and study a body of unstructured text without prior knowledge of its thematic structure. The system innovatively integrates latent semantic indexing, natural language processing, and social network analysis. The contributions of the work include providing an intuitive and directly accessible representation of a latent semantic space derived from the text corpus, an integrated process for identifying salient lines of stories, and coordinated visualizations across a spectrum of perspectives in terms of people, locations, and events involved in each story line. The system is tested with the 2006 VAST contest data, in particular, the portion of news articles.",
"title": ""
},
{
"docid": "neg:1840133_6",
"text": "Vast amount of information is available on web. Data analysis applications such as extracting mutual funds information from a website, daily extracting opening and closing price of stock from a web page involves web data extraction. Huge efforts are made by lots of researchers to automate the process of web data scraping. Lots of techniques depends on the structure of web page i.e. html structure or DOM tree structure to scrap data from web page. In this paper we are presenting survey of HTML aware web scrapping techniques. Keywords— DOM Tree, HTML structure, semi structured web pages, web scrapping and Web data extraction.",
"title": ""
},
{
"docid": "neg:1840133_7",
"text": "The CASAS architecture facilitates the development and implementation of future smart home technologies by offering an easy-to-install lightweight design that provides smart home capabilities out of the box with no customization or training.",
"title": ""
},
{
"docid": "neg:1840133_8",
"text": "An Optimal fuzzy logic guidance (OFLG) law for a surface to air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the of the membership functions' (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where; control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with classical fuzzy logic guidance (FLG) law as well as PNG one. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.",
"title": ""
},
{
"docid": "neg:1840133_9",
"text": "18-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET)/computed tomography (CT) is currently the most valuable imaging technique in Hodgkin lymphoma. Since its first use in lymphomas in the 1990s, it has become the gold standard in the staging and end-of-treatment remission assessment in patients with Hodgkin lymphoma. The possibility of using early (interim) PET during first-line therapy to evaluate chemosensitivity and thus personalize treatment at this stage holds great promise, and much attention is now being directed toward this goal. With high probability, it is believed that in the near future, the result of interim PET-CT would serve as a compass to optimize treatment. Also the role of PET in pre-transplant assessment is currently evolving. Much controversy surrounds the possibility of detecting relapse after completed treatment with the use of PET in surveillance in the absence of symptoms suggestive of recurrence and the results of published studies are rather discouraging because of low positive predictive value. This review presents current knowledge about the role of 18-FDG-PET/CT imaging at each point of management of patients with Hodgkin lymphoma.",
"title": ""
},
{
"docid": "neg:1840133_10",
"text": "As organizations scale up, their collective knowledge increases, and the potential for serendipitous collaboration between members grows dramatically. However, finding people with the right expertise or interests becomes much more difficult. Semi-structured social media, such as blogs, forums, and bookmarking, present a viable platform for collaboration-if enough people participate, and if shared content is easily findable. Within the trusted confines of an organization, users can trade anonymity for a rich identity that carries information about their role, location, and position in its hierarchy.\n This paper describes WaterCooler, a tool that aggregates shared internal social media and cross-references it with an organization's directory. We deployed WaterCooler in a large global enterprise and present the results of a preliminary user study. Despite the lack of complete social networking affordances, we find that WaterCooler changed users' perceptions of their workplace, made them feel more connected to each other and the company, and redistributed users' attention outside their own business groups.",
"title": ""
},
{
"docid": "neg:1840133_11",
"text": "In developing automated systems to recognize the emotional content of music, we are faced with a problem spanning two disparate domains: the space of human emotions and the acoustic signal of music. To address this problem, we must develop models for both data collected from humans describing their perceptions of musical mood and quantitative features derived from the audio signal. In previous work, we have presented a collaborative game, MoodSwings, which records dynamic (per-second) mood ratings from multiple players within the two-dimensional Arousal-Valence representation of emotion. Using this data, we present a system linking models of acoustic features and human data to provide estimates of the emotional content of music according to the arousal-valence space. Furthermore, in keeping with the dynamic nature of musical mood we demonstrate the potential of this approach to track the emotional changes in a song over time. We investigate the utility of a range of acoustic features based on psychoacoustic and music-theoretic representations of the audio for this application. Finally, a simplified version of our system is re-incorporated into MoodSwings as a simulated partner for single-players, providing a potential platform for furthering perceptual studies and modeling of musical mood.",
"title": ""
},
{
"docid": "neg:1840133_12",
"text": "A large class of computational problems involve the determination of properties of graphs, digraphs, integers, arrays of integers, finite families of finite sets, boolean formulas and elements of other countable domains. Through simple encodings from such domains into the set of words over a finite alphabet these problems can be converted into language recognition problems, and we can inquire into their computational complexity. It is reasonable to consider such a problem satisfactorily solved when an algorithm for its solution is found which terminates within a number of steps bounded by a polynomial in the length of the input. We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent, in the sense that either each of them possesses a polynomial-bounded algorithm or none of them does.",
"title": ""
},
{
"docid": "neg:1840133_13",
"text": "SIMD parallelism has become an increasingly important mechanism for delivering performance in modern CPUs, due its power efficiency and relatively low cost in die area compared to other forms of parallelism. Unfortunately, languages and compilers for CPUs have not kept up with the hardware's capabilities. Existing CPU parallel programming models focus primarily on multi-core parallelism, neglecting the substantial computational capabilities that are available in CPU SIMD vector units. GPU-oriented languages like OpenCL support SIMD but lack capabilities needed to achieve maximum efficiency on CPUs and suffer from GPU-driven constraints that impair ease of use on CPUs. We have developed a compiler, the Intel R® SPMD Program Compiler (ispc), that delivers very high performance on CPUs thanks to effective use of both multiple processor cores and SIMD vector units. ispc draws from GPU programming languages, which have shown that for many applications the easiest way to program SIMD units is to use a single-program, multiple-data (SPMD) model, with each instance of the program mapped to one SIMD lane. We discuss language features that make ispc easy to adopt and use productively with existing software systems and show that ispc delivers up to 35x speedups on a 4-core system and up to 240× speedups on a 40-core system for complex workloads (compared to serial C++ code).",
"title": ""
},
{
"docid": "neg:1840133_14",
"text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparable to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile. This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.",
"title": ""
},
{
"docid": "neg:1840133_15",
"text": "Remote work and intensive use of Information Technologies (IT) are increasingly common in organizations. At the same time, professional stress seems to develop. However, IS research has paid little attention to the relationships between these two phenomena. The purpose of this research in progress is to present a framework that introduces the influence of (1) new spatial and temporal constraints and of (2) intensive use of IT on employee emotions at work. Specifically, this paper relies on virtuality (e.g. Chudoba et al. 2005) and media richness (Daft and Lengel 1984) theories to determine the emotional consequences of geographically distributed work.",
"title": ""
},
{
"docid": "neg:1840133_16",
"text": "ion Description and Purpose Variable names Provide human readable names to data addresses Function names Provide human readable names to function addresses Control structures Eliminate ‘‘spaghetti’’ code (The ‘‘goto’’ statement is no longer necessary.) Argument passing Default argument values, keyword specification of arguments, variable length argument lists, etc. Data structures Allow conceptual organization of data Data typing Binds the type of the data to the type of the variable Static Insures program correctness, sacrificing generality. Dynamic Greater generality, sacrificing guaranteed correctness. Inheritance Allows creation of families of related types and easy re-use of common functionality Message dispatch Providing one name to multiple implementations of the same concept Single dispatch Dispatching to a function based on the run-time type of one argument Multiple dispatch Dispatching to a function based on the run-time type of multiple arguments. Predicate dispatch Dispatching to a function based on run-time state of arguments Garbage collection Automated memory management Closures Allow creation, combination, and use of functions as first-class values Lexical binding Provides access to values in the defining context Dynamic binding Provides access to values in the calling context (.valueEnvir in SC) Co-routines Synchronous cooperating processes Threads Asynchronous processes Lazy evaluation Allows the order of operations not to be specified. Infinitely long processes and infinitely large data structures can be specified and used as needed. Applying Language Abstractions to Computer Music The SuperCollider language provides many of the abstractions listed above. SuperCollider is a dynamically typed, single-inheritance, single-argument dispatch, garbage-collected, object-oriented language similar to Smalltalk (www.smalltalk.org). In SuperCollider, everything is an object, including basic types like letters and numbers. 
Objects in SuperCollider are organized into classes. The UGen class provides the abstraction of a unit generator, and the Synth class represents a group of UGens operating as a group to generate output. An instrument is constructed functionally. That is, when one writes a sound-processing function, one is actually writing a function that creates and connects unit generators. This is different from a procedural or static object specification of a network of unit generators. Instrument functions in SuperCollider can generate the network of unit generators using the full algorithmic capability of the language. For example, the following code can easily generate multiple versions of a patch by changing the values of the variables that specify the dimensions (number of exciters, number of comb delays, number of allpass delays). In a procedural language like Csound or a ‘‘wire-up’’ environment like Max, a different patch would have to be created for different values for the dimensions of the patch.",
"title": ""
},
{
"docid": "neg:1840133_17",
"text": "OBJECTIVE\nThe biologic basis for gender identity is unknown. Research has shown that the ratio of the length of the second and fourth digits (2D:4D) in mammals is influenced by biologic sex in utero, but data on 2D:4D ratios in transgender individuals are scarce and contradictory. We investigated a possible association between 2D:4D ratio and gender identity in our transgender clinic population in Albany, New York.\n\n\nMETHODS\nWe prospectively recruited 118 transgender subjects undergoing hormonal therapy (50 female to male [FTM] and 68 male to female [MTF]) for finger length measurement. The control group consisted of 37 cisgender volunteers (18 females, 19 males). The length of the second and fourth digits were measured using digital calipers. The 2D:4D ratios were calculated and analyzed with unpaired t tests.\n\n\nRESULTS\nFTM subjects had a smaller dominant hand 2D:4D ratio (0.983 ± 0.027) compared to cisgender female controls (0.998 ± 0.021, P = .029), but a ratio similar to control males (0.972 ± 0.036, P =.19). There was no difference in the 2D:4D ratio of MTF subjects (0.978 ± 0.029) compared to cisgender male controls (0.972 ± 0.036, P = .434).\n\n\nCONCLUSION\nOur findings are consistent with a biologic basis for transgender identity and the possibilities that FTM gender identity is affected by prenatal androgen activity but that MTF transgender identity has a different basis.\n\n\nABBREVIATIONS\n2D:4D = 2nd digit to 4th digit; FTM = female to male; MTF = male to female.",
"title": ""
},
{
"docid": "neg:1840133_18",
"text": "Wireless communication system is a heavy dense composition of signal processing techniques with semiconductor technologies. With the ever increasing system capacity and data rate, VLSI design and implementation method for wireless communications becomes more challenging, which urges researchers in signal processing to provide new architectures and efficient algorithms to meet low power and high performance requirements. This paper presents a survey of recent research, a development in VLSI architecture and signal processing algorithms with emphasis on wireless communication systems. It is shown that while contemporary signal processing can be directly applied to the communication hardware design including ASIC, SoC, and FPGA, much work remains to realize its full potential. It is concluded that an integrated combination of VLSI and signal processing technologies will provide more complete solutions.",
"title": ""
},
{
"docid": "neg:1840133_19",
"text": "Recycling today constitutes the most environmentally friendly method of managing wood waste. A large proportion of the wood waste generated consists of used furniture and other constructed wooden items, which are composed mainly of particleboard, a material which can potentially be reused. In the current research, four different hydrothermal treatments were applied in order to recover wood particles from laboratory particleboards and use them in the production of new (recycled) ones. Quality was evaluated by determining the main properties of the original (control) and the recycled boards. Furthermore, the impact of a second recycling process on the properties of recycled particleboards was studied. With the exception of the modulus of elasticity in static bending, all of the mechanical properties of the recycled boards tested decreased in comparison with the control boards. Furthermore, the recycling process had an adverse effect on their hygroscopic properties and a beneficial effect on the formaldehyde content of the recycled boards. The results indicated that when the 1st and 2nd particleboard recycling processes were compared, it was the 2nd recycling process that caused the strongest deterioration in the quality of the recycled boards. Further research is needed in order to explain the causes of the recycled board quality falloff and also to determine the factors in the recycling process that influence the quality degradation of the recycled boards.",
"title": ""
}
] |
1840134 | 25 Tweets to Know You: A New Model to Predict Personality with Social Media | [
{
"docid": "pos:1840134_0",
"text": "We analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age. In our open-vocabulary technique, the data itself drives a comprehensive exploration of language that distinguishes people, finding connections that are not captured with traditional closed-vocabulary word-category analyses. Our analyses shed new light on psychosocial processes yielding results that are face valid (e.g., subjects living in high elevations talk about the mountains), tie in with other research (e.g., neurotic people disproportionately use the phrase 'sick of' and the word 'depressed'), suggest new hypotheses (e.g., an active life implies emotional stability), and give detailed insights (males use the possessive 'my' when mentioning their 'wife' or 'girlfriend' more often than females use 'my' with 'husband' or 'boyfriend'). To date, this represents the largest study, by an order of magnitude, of language and personality.",
"title": ""
},
{
"docid": "pos:1840134_1",
"text": "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.",
"title": ""
}
] | [
{
"docid": "neg:1840134_0",
"text": "The OECD's Brain and Learning project (2002) emphasized that many misconceptions about the brain exist among professionals in the field of education. Though these so-called \"neuromyths\" are loosely based on scientific facts, they may have adverse effects on educational practice. The present study investigated the prevalence and predictors of neuromyths among teachers in selected regions in the United Kingdom and the Netherlands. A large observational survey design was used to assess general knowledge of the brain and neuromyths. The sample comprised 242 primary and secondary school teachers who were interested in the neuroscience of learning. It would be of concern if neuromyths were found in this sample, as these teachers may want to use these incorrect interpretations of neuroscience findings in their teaching practice. Participants completed an online survey containing 32 statements about the brain and its influence on learning, of which 15 were neuromyths. Additional data was collected regarding background variables (e.g., age, sex, school type). Results showed that on average, teachers believed 49% of the neuromyths, particularly myths related to commercialized educational programs. Around 70% of the general knowledge statements were answered correctly. Teachers who read popular science magazines achieved higher scores on general knowledge questions. More general knowledge also predicted an increased belief in neuromyths. These findings suggest that teachers who are enthusiastic about the possible application of neuroscience findings in the classroom find it difficult to distinguish pseudoscience from scientific facts. Possessing greater general knowledge about the brain does not appear to protect teachers from believing in neuromyths. This demonstrates the need for enhanced interdisciplinary communication to reduce such misunderstandings in the future and establish a successful collaboration between neuroscience and education.",
"title": ""
},
{
"docid": "neg:1840134_1",
"text": "Airline companies have increasingly employed electronic commerce (eCommerce) for strategic purposes, most notably in order to achieve long-term competitive advantage and global competitiveness by enhancing customer satisfaction as well as marketing efficacy and managerial efficiency. eCommerce has now emerged as possibly the most representative distribution channel in the airline industry. In this study, we describe an extended technology acceptance model (TAM), which integrates subjective norms and electronic trust (eTrust) into the model, in order to determine their relevance to the acceptance of airline business-to-customer (B2C) eCommerce websites (AB2CEWS). The proposed research model was tested empirically using data collected from a survey of customers who had utilized B2C eCommerce websites of two representative airline companies in South Korea (i.e., KAL and ASIANA) for the purpose of purchasing air tickets. Path analysis was employed in order to assess the significance and strength of the hypothesized causal relationships between subjective norms, eTrust, perceived ease of use, perceived usefulness, attitude toward use, and intention to reuse. Our results provide general support for an extended TAM, and also confirmed its robustness in predicting customers’ intention to reuse AB2CEWS. Valuable information was found from our results regarding the management of AB2CEWS in the formulation of airlines’ Internet marketing strategies. 2008 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "neg:1840134_2",
"text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840134_3",
"text": "This paper presents a 9-bit subrange analog-to-digital converter (ADC) consisting of a 3.5-bit flash coarse ADC, a 6-bit successive-approximation-register (SAR) fine ADC, and a differential segmented capacitive digital-to-analog converter (DAC). The flash ADC controls the thermometer coarse capacitors of the DAC and the SAR ADC controls the binary fine ones. Both theoretical analysis and behavioral simulations show that the differential non-linearity (DNL) of a SAR ADC with a segmented DAC is better than that of a binary ADC. The merged switching of the coarse capacitors significantly enhances overall operation speed. At 150 MS/s, the ADC consumes 1.53 mW from a 1.2-V supply. The effective number of bits (ENOB) is 8.69 bits and the effective resolution bandwidth (ERBW) is 100 MHz. With a 1.3-V supply voltage, the sampling rate is 200 MS/s with 2.2-mW power consumption. The ENOB is 8.66 bits and the ERBW is 100 MHz. The FOMs at 1.3 V and 200 MS/s, 1.2 V and 150 MS/s and 1 V and 100 MS/s are 27.2, 24.7, and 17.7 fJ/conversion-step, respectively.",
"title": ""
},
{
"docid": "neg:1840134_4",
"text": "Research into antigay violence has been limited by a lack of attention to issues of gender presentation. Understanding gender nonconformity is important for addressing antigay prejudice and hate crimes. We assessed experiences of gender-nonconformity-related prejudice among 396 Black, Latino, and White lesbian, gay, and bisexual individuals recruited from diverse community venues in New York City. We assessed the prevalence and contexts of prejudice-related life events and everyday discrimination using both quantitative and qualitative approaches. Gender nonconformity had precipitated major prejudice events for 9% of the respondents and discrimination instances for 19%. Women were more likely than men to report gender-nonconformity-related discrimination but there were no differences by other demographic characteristics. In analysis of events narratives, we show that gender nonconformity prejudice is often intertwined with antigay prejudice. Our results demonstrate that both constructs should be included when addressing prejudice and hate crimes targeting lesbian, gay, bisexual, and transgender individuals and communities.",
"title": ""
},
{
"docid": "neg:1840134_5",
"text": "What do you do to start reading introduction to computing and programming in python a multimedia approach 2nd edition? Searching the book that you love to read first or find an interesting book that will make you want to read? Everybody has difference with their reason of reading a book. Actuary, reading habit must be from earlier. Many people may be love to read, but not a book. It's not fault. Someone will be bored to open the thick book with small words to read. In more, this is the real condition. So do happen probably with this introduction to computing and programming in python a multimedia approach 2nd edition.",
"title": ""
},
{
"docid": "neg:1840134_6",
"text": "Purpose – To develop a model that bridges the gap between CSR definitions and strategy and offers guidance to managers on how to connect socially committed organisations with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. Design/methodology/approach – This paper offers a critical evaluation of the theoretical foundations of corporate responsibility (CR) and proposes a new strategic approach to CR, which seeks to overcome the limitations of normative definitions. To address this perceived issue, the authors propose a new processual model of CR, which they refer to as the 3C-SR model. Findings – The 3C-SR model can offer practical guidelines to managers on how to connect with the growing numbers of ethically aware consumers to simultaneously achieve economic and social objectives. It is argued that many of the redefinitions of CR for a contemporary audience are normative exhortations (“calls to arms”) that fail to provide managers with the conceptual resources to move from “ought” to “how”. Originality/value – The 3C-SR model offers a novel approach to CR in so far as it addresses strategy, operations and markets in a single framework.",
"title": ""
},
{
"docid": "neg:1840134_7",
"text": "This paper presents a compact planar ultra-wideband (UWB) microstrip antenna for microwave medical applications. The proposed antenna has a low profile structure, consisting of a modified radiating patch with stair steps and open slots, microstrip feed line, and T-like shape slots at the ground plane. The optimized antenna is capable of being operated in frequency range of 3.06–11.4 GHz band having good omnidirectional radiation pattern and high gain, which satisfies the requirements of UWB (3.1–10.6 GHz) applications. The antenna system has a compact size of 18×30×0.8mm3. These features make the proposed UWB antenna a good candidate for microwave medical imaging applications.",
"title": ""
},
{
"docid": "neg:1840134_8",
"text": "High voltage pulse generators can be used effectively in water treatment applications, as applying a pulsed electric field on the infected sample guarantees killing of harmful germs and bacteria. In this paper, a new high voltage pulse generator with closed loop control on its output voltage is proposed. The proposed generator is based on DC-to-DC boost converter in conjunction with capacitor-diode voltage multiplier (CDVM), and can be fed from low-voltage low-frequency AC supply, i.e. utility mains. The proposed topology provides transformer-less operation which reduces size and enhances the overall efficiency. A Detailed design of the proposed pulse generator has been presented as well. The proposed approach is validated by simulation as well as experimental results.",
"title": ""
},
{
"docid": "neg:1840134_9",
"text": "Biomedical Image Processing is a growing and demanding field. It comprises of many different types of imaging methods likes CT scans, X-Ray and MRI. These techniques allow us to identify even the smallest abnormalities in the human body. The primary goal of medical imaging is to extract meaningful and accurate information from these images with the least error possible. Out of the various types of medical imaging processes available to us, MRI is the most reliable and safe. It does not involve exposing the body to any sorts of harmful radiation. This MRI can then be processed, and the tumor can be segmented. Tumor Segmentation includes the use of several different techniques. The whole process of detecting brain tumor from an MRI can be classified into four different categories: Pre-Processing, Segmentation, Optimization and Feature Extraction. This survey involves reviewing the research by other professionals and compiling it into one paper.",
"title": ""
},
{
"docid": "neg:1840134_10",
"text": "Data sparsity is one of the most challenging problems for recommender systems. One promising solution to this problem is cross-domain recommendation, i.e., leveraging feedbacks or ratings from multiple domains to improve recommendation performance in a collective manner. In this paper, we propose an Embedding and Mapping framework for Cross-Domain Recommendation, called EMCDR. The proposed EMCDR framework distinguishes itself from existing crossdomain recommendation models in two aspects. First, a multi-layer perceptron is used to capture the nonlinear mapping function across domains, which offers high flexibility for learning domain-specific features of entities in each domain. Second, only the entities with sufficient data are used to learn the mapping function, guaranteeing its robustness to noise caused by data sparsity in single domain. Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms stateof-the-art cross-domain recommendation methods.",
"title": ""
},
{
"docid": "neg:1840134_11",
"text": "For automatic driving, vehicles must be able to recognize their environment and take control of the vehicle. The vehicle must perceive relevant objects, which includes other traffic participants as well as infrastructure information, assess the situation and generate appropriate actions. This work is a first step of integrating previous works on environment perception and situation analysis toward automatic driving strategies. We present a method for automatic cruise control of vehicles in urban environments. The longitudinal velocity is influenced by the speed limit, the curvature of the lane, the state of the next traffic light and the most relevant target on the current lane. The necessary acceleration is computed in respect to the information which is estimated by an instrumented vehicle.",
"title": ""
},
{
"docid": "neg:1840134_12",
"text": "Archaeological remote sensing is not a novel discipline. Indeed, there is already a suite of geoscientific techniques that are regularly used by practitioners in the field, according to standards and best practice guidelines. However, (i) the technological development of sensors for data capture; (ii) the accessibility of new remote sensing and Earth Observation data; and (iii) the awareness that a combination of different techniques can lead to retrieval of diverse and complementary information to characterize landscapes and objects of archaeological value and significance, are currently three triggers stimulating advances in methodologies for data acquisition, signal processing, and the integration and fusion of extracted information. The Special Issue “Remote Sensing and Geosciences for Archaeology” therefore presents a collection of scientific contributions that provides a sample of the state-of-the-art and forefront research in this field. Site discovery, understanding of cultural landscapes, augmented knowledge of heritage, condition assessment, and conservation are the main research and practice targets that the papers published in this Special Issue aim to address.",
"title": ""
},
{
"docid": "neg:1840134_13",
"text": "A MIMO antenna of size 40mm × 40mm × 1.6mm is proposed for WLAN applications. Antenna consists of four mushroom shaped Apollonian fractal planar monopoles having micro strip feed lines with edge feeding. It uses defective ground structure (DGS) to achieve good isolation. To achieve more isolation, the antenna elements are placed orthogonal to each other. Further, isolation can be increased using parasitic elements between the elements of antenna. Simulation is done to study reflection coefficient as well as coupling between input ports, directivity, peak gain, efficiency, impedance and VSWR. Results show that MIMO antenna has a bandwidth of 1.9GHZ ranging from 5 to 6.9 GHz, and mutual coupling of less than -20dB.",
"title": ""
},
{
"docid": "neg:1840134_14",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
{
"docid": "neg:1840134_15",
"text": "Although many critics are reluctant to accept the trustworthiness of qualitative research, frameworks for ensuring rigour in this form of work have been in existence for many years. Guba’s constructs, in particular, have won considerable favour and form the focus of this paper. Here researchers seek to satisfy four criteria. In addressing credibility, investigators attempt to demonstrate that a true picture of the phenomenon under scrutiny is being presented. To allow transferability, they provide sufficient detail of the context of the fieldwork for a reader to be able to decide whether the prevailing environment is similar to another situation with which he or she is familiar and whether the findings can justifiably be applied to the other setting. The meeting of the dependability criterion is difficult in qualitative work, although researchers should at least strive to enable a future investigator to repeat the study. Finally, to achieve confirmability, researchers must take steps to demonstrate that findings emerge from the data and not their own predispositions. The paper concludes by suggesting that it is the responsibility of research methods teachers to ensure that this or a comparable model for ensuring trustworthiness is followed by students undertaking a qualitative inquiry.",
"title": ""
},
{
"docid": "neg:1840134_16",
"text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.",
"title": ""
},
{
"docid": "neg:1840134_17",
"text": "With the explosive growth of microblogging services, short-text messages (also known as tweets) are being created and shared at an unprecedented rate. Tweets in its raw form can be incredibly informative, but also overwhelming. For both end-users and data analysts it is a nightmare to plow through millions of tweets which contain enormous noises and redundancies. In this paper, we study continuous tweet summarization as a solution to address this problem. While traditional document summarization methods focus on static and small-scale data, we aim to deal with dynamic, quickly arriving, and large-scale tweet streams. We propose a novel prototype called Sumblr (SUMmarization By stream cLusteRing) for tweet streams. We first propose an online tweet stream clustering algorithm to cluster tweets and maintain distilled statistics called Tweet Cluster Vectors. Then we develop a TCV-Rank summarization technique for generating online summaries and historical summaries of arbitrary time durations. Finally, we describe a topic evolvement detection method, which consumes online and historical summaries to produce timelines automatically from tweet streams. Our experiments on large-scale real tweets demonstrate the efficiency and effectiveness of our approach.",
"title": ""
},
{
"docid": "neg:1840134_18",
"text": "The brain which is composed of more than 100 billion nerve cells is a sophisticated biochemical factory. For many years, neurologists, psychotherapists, researchers, and other health care professionals have studied the human brain. With the development of computer and information technology, it makes brain complex spectrum analysis to be possible and opens a highlight field for the study of brain science. In the present work, observation and exploring study of the activities of brain under brainwave music stimulus are systemically made by experimental and spectrum analysis technology. From our results, the power of the 10.5Hz brainwave appears in the experimental figures, it was proved that upper alpha band is entrained under the special brainwave music. According to the Mozart effect and the analysis of improving memory performance, the results confirm that upper alpha band is indeed related to the improvement of learning efficiency.",
"title": ""
},
{
"docid": "neg:1840134_19",
"text": "PURPOSE\nTo compare a developmental indirect resin composite with an established, microfilled directly placed resin composite used to restore severely worn teeth. The cause of the tooth wear was a combination of erosion and attrition.\n\n\nMATERIALS AND METHODS\nOver a 3-year period, a total of 32 paired direct or indirect microfilled resin composite restorations were placed on premolars and molars in 16 patients (mean age: 43 years, range: 25 to 62) with severe tooth wear. A further 26 pairs of resin composite were placed in 13 controls (mean age: 39 years, range 28 to 65) without evidence of tooth wear. The material was randomly selected for placement in the left or right sides of the mouth.\n\n\nRESULTS\nSixteen restorations were retained in the tooth wear group (7 indirect and 9 direct), 7 (22%) fractured (4 indirect and 3 direct), and 9 (28%) were completely lost (5 indirect and 4 direct). There was no statistically significant difference in failure rates between the materials in this group. The control group had 21 restorations (80%) that were retained (10 indirect and 12 direct), a significantly lower rate of failure than in the tooth wear patients (P = .027).\n\n\nCONCLUSION\nThe results of this short-term study suggest that the use of direct and indirect resin composites for restoring worn posterior teeth is contraindicated.",
"title": ""
}
] |
1840135 | WHICH TYPE OF MOTIVATION IS CAPABLE OF DRIVING ACHIEVEMENT BEHAVIORS SUCH AS EXERCISE IN DIFFERENT PERSONALITIES? BY RAJA AMJOD | [
{
"docid": "pos:1840135_0",
"text": "The standard life events methodology for the prediction of psychological symptoms was compared with one focusing on relatively minor events, namely, the hassles and uplifts of everyday life. Hassles and Uplifts Scales were constructed and administered once a month for 10 consecutive months to a community sample of middle-aged adults. It was found that the Hassles Scale was a better predictor of concurrent and subsequent psychological symptoms than were the life events scores, and that the scale shared most of the variance in symptoms accounted for by life events. When the effects of life events scores were removed, hassles and symptoms remained significantly correlated. Uplifts were positively related to symptoms for women but not for men. Hassles and uplifts were also shown to be related, although only modestly so, to positive and negative affect, thus providing discriminate validation for hassles and uplifts in comparison to measures of emotion. It was concluded that the assessment of daily hassles and uplifts may be a better approach to the prediction of adaptational outcomes than the usual life events approach.",
"title": ""
}
] | [
{
"docid": "neg:1840135_0",
"text": "In this paper, we describe our participation at the subtask of extraction of relationships between two identified keyphrases. This task can be very helpful in improving search engines for scientific articles. Our approach is based on the use of a convolutional neural network (CNN) trained on the training dataset. This deep learning model has already achieved successful results for the extraction relationships between named entities. Thus, our hypothesis is that this model can be also applied to extract relations between keyphrases. The official results of the task show that our architecture obtained an F1-score of 0.38% for Keyphrases Relation Classification. This performance is lower than the expected due to the generic preprocessing phase and the basic configuration of the CNN model, more complex architectures are proposed as future work to increase the classification rate.",
"title": ""
},
{
"docid": "neg:1840135_1",
"text": "We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax. The results are correct for viewpoints that are static or moving, far away or nearby. Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: First, it is converted into an ordinary texture using a surprisingly simple 1-D forward transform. The resulting texture is then mapped onto the polygon using standard texture mapping. The 1-D warping functions work in texture coordinates to handle the parallax and visibility changes that result from the 3-D shape of the displacement surface. The subsequent texture-mapping operation handles the transformation from texture to screen coordinates.",
"title": ""
},
{
"docid": "neg:1840135_2",
"text": "Improved sensors in the automotive field are leading to multi-object tracking of extended objects becoming more and more important for advanced driver assistance systems and highly automated driving. This paper proposes an approach that combines a PHD filter for extended objects, viz. objects that originate multiple measurements while also estimating the shape of the objects via constructing an object-local occupancy grid map and then extracting a polygonal chain. This allows tracking even in traffic scenarios where unambiguous segmentation of measurements is difficult or impossible. In this work, this is achieved using multiple segmentation assumptions by applying different parameter sets for the DBSCAN clustering algorithm. The proposed algorithm is evaluated using simulated data and real sensor data from a test track including highly accurate D-GPS and IMU data as a ground truth.",
"title": ""
},
{
"docid": "neg:1840135_3",
"text": "Gallium-67 citrate is currently considered as the tracer of first choice in the diagnostic workup of fever of unknown origin (FUO). Fluorine-18 2'-deoxy-2-fluoro-D-glucose (FDG) has been shown to accumulate in malignant tumours but also in inflammatory processes. The aim of this study was to prospectively evaluate FDG imaging with a double-head coincidence camera (DHCC) in patients with FUO in comparison with planar and single-photon emission tomography (SPET) 67Ga citrate scanning. Twenty FUO patients underwent FDG imaging with a DHCC which included transaxial and longitudinal whole-body tomography. In 18 of these subjects, 67Ga citrate whole-body and SPET imaging was performed. The 67Ga citrate and FDG images were interpreted by two investigators, both blinded to the results of other diagnostic modalities. Forty percent (8/20) of the patients had infection, 25% (5/20) had auto-immune diseases, 10% (2/20) had neoplasms and 15% (3/20) had other diseases. Fever remained unexplained in 10% (2/20) of the patients. Of the 20 patients studied, FDG imaging was positive and essentially contributed to the final diagnosis in 11 (55%). The sensitivity of transaxial FDG tomography in detecting the focus of fever was 84% and the specificity, 86%. Positive and negative predictive values were 92% and 75%, respectively. If the analysis was restricted to the 18 patients who were investigated both with 67Ga citrate and FDG, sensitivity was 81% and specificity, 86%. Positive and negative predictive values were 90% and 75%, respectively. The diagnostic accuracy of whole-body FDG tomography (again restricted to the aforementioned 18 patients) was lower (sensitivity, 36%; specificity, 86%; positive and negative predictive values, 80% and 46%, respectively). 67Ga citrate SPET yielded a sensitivity of 67% in detecting the focus of fever and a specificity of 78%. Positive and negative predictive values were 75% and 70%, respectively. A low sensitivity (45%), but combined with a high specificity (100%), was found in planar 67Ga imaging. Positive and negative predictive values were 100% and 54%, respectively. It is concluded that in the context of FUO, transaxial FDG tomography performed with a DHCC is superior to 67Ga citrate SPET. This seems to be the consequence of superior tracer kinetics of FDG compared with those of 67Ga citrate and of a better spatial resolution of a DHCC system compared with SPET imaging. In patients with FUO, FDG imaging with either dedicated PET or DHCC should be considered the procedure of choice.",
"title": ""
},
{
"docid": "neg:1840135_4",
"text": "Visual SLAM systems aim to estimate the motion of a moving camera together with the geometric structure and appearance of the world being observed. To the extent that this is possible using only an image stream, the core problem that must be solved by any practical visual SLAM system is that of obtaining correspondence throughout the images captured. Modern visual SLAM pipelines commonly obtain correspondence by using sparse feature matching techniques and construct maps using a composition of point, line or other simple geometric primitives. The resulting sparse feature map representations provide sparsely furnished, incomplete reconstructions of the observed scene. Related techniques from multiple view stereo (MVS) achieve high quality dense reconstruction by obtaining dense correspondences over calibrated image sequences. Despite the usefulness of the resulting dense models, these techniques have been of limited use in visual SLAM systems. The computational complexity of estimating dense surface geometry has been a practical barrier to its use in real-time SLAM. Furthermore, MVS algorithms have typically required a fixed length, calibrated image sequence to be available throughout the optimisation — a condition fundamentally at odds with the online nature of SLAM. With the availability of massively-parallel commodity computing hardware, we demonstrate new algorithms that achieve high quality incremental dense reconstruction within online visual SLAM. The result is a live dense reconstruction (LDR) of scenes that makes possible numerous applications that can utilise online surface modelling, for instance: planning robot interactions with unknown objects, augmented reality with characters that interact with the scene, or providing enhanced data for object recognition. The core of this thesis goes beyond LDR to demonstrate fully dense visual SLAM. We replace the sparse feature map representation with an incrementally updated, non-parametric, dense surface model. By enabling real-time dense depth map estimation through novel short baseline MVS, we can continuously update the scene model and further leverage its predictive capabilities to achieve robust camera pose estimation with direct whole image alignment. We demonstrate the capabilities of dense visual SLAM using a single moving passive camera, and also when real-time surface measurements are provided by a commodity depth camera. The results demonstrate state-of-the-art, pick-up-and-play 3D reconstruction and camera tracking systems useful in many real world scenarios. Acknowledgements There are key individuals who have provided me with all the support and tools that a student who sets out on an adventure could want. Here, I wish to acknowledge those friends and colleagues, that by providing technical advice or much needed fortitude, helped bring this work to life. Prof. Andrew Davison’s robot vision lab provides a unique research experience amongst computer vision labs in the world. First and foremost, I thank my supervisor Andy for giving me the chance to be part of that experience. His brilliant guidance and support of my growth as a researcher are well matched by his enthusiasm for my work. This is made most clear by his fostering the joy of giving live demonstrations of work in progress. His complete faith in my ability drove me on and gave me license to develop new ideas and build bridges to research areas that we knew little about. Under his guidance I’ve been given every possible opportunity to develop my research interests, and this thesis would not be possible without him. My appreciation for Prof. Murray Shanahan’s insights and spirit began with our first conversation. Like ripples from a stone cast into a pond, the presence of his ideas and depth of knowledge instantly propagated through my mind. His enthusiasm and capacity to discuss any topic, old or new to him, and his ability to bring ideas together across the worlds of science and philosophy, showed me an openness to thought that I continue to try to emulate. I am grateful to Murray for securing a generous scholarship for me in the Department of Computing and for providing a home away from home in his cognitive robotics lab. I am indebted to Prof. Owen Holland who introduced me to the world of research at the University of Essex. Owen showed me a first glimpse of the breadth of ideas in robotics, AI, cognition and beyond. I thank Owen for introducing me to the idea of continuing in academia for a doctoral degree and for introducing me to Murray. I have learned much with many friends and colleagues at Imperial College, but there are three who have been instrumental. I thank Steven Lovegrove, Ankur Handa and Renato Salas-Moreno who travelled with me on countless trips into the unknown, sometimes to chase a small concept but more often than not in pursuit of the bigger picture we all wanted to see. They indulged me with months of exploration, collaboration and fun, leading to us understand ideas and techniques that were once out of reach. Together, we were able to learn much more. Thank you Hauke Strasdatt, Luis Pizarro, Jan Jachnick, Andreas Fidjeland and members of the robot vision and cognitive robotics labs for brilliant discussions and for sharing the",
"title": ""
},
{
"docid": "neg:1840135_5",
"text": "A model of positive psychological functioning that emerges from diverse domains of theory and philosophy is presented. Six key dimensions of wellness are defined, and empirical research summarizing their empirical translation and sociodemographic correlates is presented. Variations in well-being are explored via studies of discrete life events and enduring human experiences. Life histories of the psychologically vulnerable and resilient, defined via the cross-classification of depression and well-being, are summarized. Implications of the focus on positive functioning for research on psychotherapy, quality of life, and mind/body linkages are reviewed.",
"title": ""
},
{
"docid": "neg:1840135_6",
"text": "Geobacter sulfurreducens is a well-studied representative of the Geobacteraceae, which play a critical role in organic matter oxidation coupled to Fe(III) reduction, bioremediation of groundwater contaminated with organics or metals, and electricity production from waste organic matter. In order to investigate G. sulfurreducens central metabolism and electron transport, a metabolic model which integrated genome-based predictions with available genetic and physiological data was developed via the constraint-based modeling approach. Evaluation of the rates of proton production and consumption in the extracellular and cytoplasmic compartments revealed that energy conservation with extracellular electron acceptors, such as Fe(III), was limited relative to that associated with intracellular acceptors. This limitation was attributed to lack of cytoplasmic proton consumption during reduction of extracellular electron acceptors. Model-based analysis of the metabolic cost of producing an extracellular electron shuttle to promote electron transfer to insoluble Fe(III) oxides demonstrated why Geobacter species, which do not produce shuttles, have an energetic advantage over shuttle-producing Fe(III) reducers in subsurface environments. In silico analysis also revealed that the metabolic network of G. sulfurreducens could synthesize amino acids more efficiently than that of Escherichia coli due to the presence of a pyruvate-ferredoxin oxidoreductase, which catalyzes synthesis of pyruvate from acetate and carbon dioxide in a single step. In silico phenotypic analysis of deletion mutants demonstrated the capability of the model to explore the flexibility of G. sulfurreducens central metabolism and correctly predict mutant phenotypes. These results demonstrate that iterative modeling coupled with experimentation can accelerate the understanding of the physiology of poorly studied but environmentally relevant organisms and may help optimize their practical applications.",
"title": ""
},
{
"docid": "neg:1840135_7",
"text": "Results from a new experiment in the Philippines shed light on the effects of voter information on vote buying and incumbent advantage. The treatment provided voters with information about the existence of a major spending program and the proposed allocations and promises of mayoral candidates just prior municipal elections. It left voters more knowledgeable about candidates’ proposed policies and increased the salience of spending. Treated voters were more likely to be targeted for vote buying. We develop a model of vote buying that accounts for these results. The information we provided attenuated incumbent advantage, prompting incumbents to increase their vote buying in response. Consistent with this explanation, both knowledge and vote buying impacts were higher in incumbent-dominated municipalities. Our findings show that, in a political environment where vote buying is the currency of electoral mobilization, incumbent efforts to increase voter welfare may take the form of greater vote buying. ∗This project would not have been possible without the support and cooperation of PPCRV volunteers in Ilocos Norte and Ilocos Sur. We are grateful to Michael Davidson for excellent research assistance and to Prudenciano Gordoncillo and the UPLB team for collecting the data. We thank Marcel Fafchamps, Clement Imbert, Pablo Querubin, Simon Quinn and two anonymous reviewers for constructive comments on the pre-analysis plan. Pablo Querubin graciously shared his precinct-level data from the 2010 elections with us. We thank conference and seminar participants at Gothenburg, Copenhagen, and Oxford for comments. The project received funding from the World Bank and ethics approval from the University of Oxford Economics Department (Econ DREC Ref. No. 1213/0014). All remaining errors are ours. The opinions and conclusions expressed here are those of the authors and not those of the World Bank or the Inter-American Development Bank. †University of British Columbia; email: cesi.cruz@ubc.ca ‡Inter-American Development Bank; email: pkeefer@iadb.org §Oxford University; email: julien.labonne@bsg.ox.ac.uk",
"title": ""
},
{
"docid": "neg:1840135_8",
"text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found adverse effects from using both at the same time. So, what does the current evidence say?",
"title": ""
},
{
"docid": "neg:1840135_9",
"text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.",
"title": ""
},
{
"docid": "neg:1840135_10",
"text": "Despite decades of research, the roles of climate and humans in driving the dramatic extinctions of large-bodied mammals during the Late Quaternary remain contentious. We use ancient DNA, species distribution models and the human fossil record to elucidate how climate and humans shaped the demographic history of woolly rhinoceros, woolly mammoth, wild horse, reindeer, bison and musk ox. We show that climate has been a major driver of population change over the past 50,000 years. However, each species responds differently to the effects of climatic shifts, habitat redistribution and human encroachment. Although climate change alone can explain the extinction of some species, such as Eurasian musk ox and woolly rhinoceros, a combination of climatic and anthropogenic effects appears to be responsible for the extinction of others, including Eurasian steppe bison and wild horse. We find no genetic signature or any distinctive range dynamics distinguishing extinct from surviving species, underscoring the challenges associated with predicting future responses of extant mammals to climate and human-mediated habitat change. Toward the end of the Late Quaternary, beginning c. 50,000 years ago, Eurasia and North America lost c. 36% and 72% of their large-bodied mammalian genera (megafauna), respectively1. The debate surrounding the potential causes of these extinctions has focused primarily on the relative roles of climate and humans2,3,4,5. In general, the proportion of species that went extinct was greatest on continents that experienced the most dramatic Correspondence and requests for materials should be addressed to E.W (ewillerslev@snm.ku.dk). *Joint first authors †Deceased Supplementary Information is linked to the online version of the paper at www.nature.com/nature. Author contributions E.W. initially conceived and headed the overall project. C.R. headed the species distribution modelling and range measurements. E.D.L. and J.T.S. 
extracted, amplified and sequenced the reindeer DNA sequences. J.B. extracted, amplified and sequenced the woolly rhinoceros DNA sequences; M.H. generated part of the woolly rhinoceros data. J.W., K-P.K., J.L. and R.K.W. generated the horse DNA sequences; A.C. generated part of the horse data. L.O., E.D.L. and B.S. analysed the genetic data, with input from R.N., K.M., M.A.S. and S.Y.W.H. Palaeoclimate simulations were provided by P.B., A.M.H, J.S.S. and P.J.V. The directly-dated spatial LAT/LON megafauna locality information was collected by E.D.L., K.A.M., D.N.-B., D.B. and A.U.; K.A.M. and D.N-B performed the species distribution modelling and range measurements. M.B. carried out the gene-climate correlation. A.U. and D.B. assembled the human Upper Palaeolithic sites from Eurasia. T.G. and K.E.G. assembled the archaeofaunal assemblages from Siberia. A.U. analysed the spatial overlap of humans and megafauna and the archaeofaunal assemblages. E.D.L., L.O., B.S., K.A.M., D.N.-B., M.K.B., A.U., T.G. and K.E.G. wrote the Supplementary Information. D.F., G.Z., T.W.S., K.A-S., G.B., J.A.B., D.L.J., P.K., T.K., X.L., L.D.M., H.G.M., D.M., M.M., E.S., M.S., R.S.S., T.S., E.S., A.T., R.W., A.C. provided the megafauna samples used for ancient DNA analysis. E.D.L. made the figures. E.D.L, L.O. and E.W. wrote the majority of the manuscript, with critical input from B.S., M.H., K.A.M., M.T.P.G., C.R., R.K.W, A.U. and the remaining authors. Mitochondrial DNA sequences have been deposited in GenBank under accession numbers JN570760-JN571033. Reprints and permissions information is available at www.nature.com/reprints. NIH Public Access Author Manuscript Nature. Author manuscript; available in PMC 2014 June 25. Published in final edited form as: Nature. ; 479(7373): 359–364. doi:10.1038/nature10574. climatic changes6, implying a major role of climate in species loss. 
However, the continental pattern of megafaunal extinctions in North America approximately coincides with the first appearance of humans, suggesting a potential anthropogenic contribution to species extinctions3,5. Demographic trajectories of different taxa vary widely and depend on the geographic scale and methodological approaches used3,5,7. For example, genetic diversity in bison8,9, musk ox10 and European cave bear11 declines gradually from c. 50–30,000 calendar years ago (ka BP). In contrast, sudden losses of genetic diversity are observed in woolly mammoth12,13 and cave lion14 long before their extinction, followed by genetic stability until the extinction events. It remains unresolved whether the Late Quaternary extinctions were a cross-taxa response to widespread climatic or anthropogenic stressors, or were a species-specific response to one or both factors15,16. Additionally, it is unclear whether distinctive genetic signatures or geographic range-size dynamics characterise extinct or surviving species— questions of particular importance to the conservation of extant species. To disentangle the processes underlying population dynamics and extinction, we investigate the demographic histories of six megafauna herbivores of the Late Quaternary: woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius), horse (wild Equus ferus and living domestic Equus caballus), reindeer/caribou (Rangifer tarandus), bison (Bison priscus/Bison bison) and musk ox (Ovibos moschatus). These taxa were characteristic of Late Quaternary Eurasia and/or North America and represent both extinct and extant species. Our analyses are based on 846 radiocarbon-dated mitochondrial DNA (mtDNA) control region sequences, 1,439 directly-dated megafaunal remains, and 6,291 radiocarbon determinations associated with Upper Palaeolithic human occupations in Eurasia. 
We reconstruct the demographic histories of the megafauna herbivores from ancient DNA data, model past species distributions and determine the geographic overlap between humans and megafauna over the last 50,000 years. We use these data to investigate how climate change and anthropogenic impacts affected species dynamics at continental and global scales, and contributed to the extinction of some species and the survival of others. Effects of climate change differ across species and continents The direct link between climate change, population size and species extinctions is difficult to document10. However, population size is likely controlled by the amount of available habitat and is indicated by the geographic range of a species17,18. We assessed the role of climate using species distribution models, dated megafauna fossil remains and palaeoclimatic data on temperature and precipitation. We estimated species range sizes at the time periods of 42, 30, 21 and 6 ka BP as a proxy for habitat availability (Fig. 1; Supplementary Information section S1). Range size dynamics were then compared to demographic histories inferred from ancient DNA using three distinct analyses (Supplementary Information section S3): (i) coalescent-based estimation of changes in effective population size through time (Bayesian skyride19), which allows detection of changes in global genetic diversity; (ii) serial coalescent simulation followed by Approximate Bayesian Computation, which selects among different models describing continental population dynamics; and (iii) isolation-by-distance analysis, which estimates potential population structure and connectivity within continents. 
If climate was a major factor driving species population sizes, we would expect expansion and contraction of a species’ geographic range to mirror population increase and decline, respectively. We find a positive correlation between changes in the size of available habitat and genetic diversity for the four species—horse, reindeer, bison and musk ox—for which we have range estimates spanning all four time-points (the correlation is not statistically significant for reindeer: p = 0.101) (Fig. 2; Supplementary Information section S4). Hence, species distribution modelling based on fossil distributions and climate data are congruent with estimates of effective population size based on ancient DNA data, even in species with very different life-history traits. We conclude that climate has been a major driving force in megafauna population changes over the past 50,000 years. It is noteworthy that both estimated modelled ranges and genetic data are derived from a subset of the entire fossil record (Supplementary Information sections S1 and S3). Thus, changes in effective population size and range size may change with the addition of more data, especially from outside the geographical regions covered by the present study. However, we expect that the reported positive correlation will prevail when congruent data are compared. The best-supported models of changes in effective population size in North America and Eurasia during periods of dramatic climatic change during the past 50,000 years are those in which populations increase in size (Fig. 3, Supplementary Information section S3). This is true for all taxa except bison. However, the timing is not synchronous across populations. Specifically, we find highest support for population increase beginning c. 34 ka BP in Eurasian horse, reindeer and musk ox (Fig. 3a). Eurasian mammoth and North American horse increase prior to the Last Glacial Maximum (LGM) c. 26 ka BP. 
Models of population increase in woolly rhinoceros and North American mammoth fit equally well before and after the LGM, and North American reindeer populations increase later still. Only North American bison shows a population decline (Fig. 3b), the intensity of which likely swamps the signal of global population increase starting at c. 35 ka BP identified in the skyride plot",
"title": ""
},
{
"docid": "neg:1840135_11",
"text": "The Open Provenance Model is a model of provenance that is designed to meet the following requirements: (1) To allow provenance information to be exchanged between systems, by means of a compatibility layer based on a shared provenance model. (2) To allow developers to build and share tools that operate on such a provenance model. (3) To define provenance in a precise, technology-agnostic manner. (4) To support a digital representation of provenance for any “thing”, whether produced by computer systems or not. (5) To allow multiple levels of description to coexist. (6) To define a core set of rules that identify the valid inferences that can be made on provenance representation. This document contains the specification of the Open Provenance Model (v1.1) resulting from a community effort to achieve inter-operability in the Provenance Challenge series.",
"title": ""
},
{
"docid": "neg:1840135_12",
"text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.",
"title": ""
},
{
"docid": "neg:1840135_13",
"text": "ImageCLEF’s plant identification task provides a testbed for the system-oriented evaluation of plant identification, more precisely the identification of 126 tree species based on leaf images. Three types of image content are considered: Scan, Scan-like (leaf photographs with a white uniform background), and Photograph (unconstrained leaf with natural background). The main originality of this data is that it was specifically built through a citizen sciences initiative conducted by Tela Botanica, a French social network of amateur and expert botanists. This makes the task closer to the conditions of a real-world application. This overview presents more precisely the resources and assessments of the task, summarizes the retrieval approaches employed by the participating groups, and provides an analysis of the main evaluation results. With a total of eleven groups from eight countries and a total of 30 runs submitted, involving distinct and original methods, this second-year pilot task confirms the Image Retrieval community’s interest in biodiversity and botany, and highlights further challenging studies in plant identification.",
"title": ""
},
{
"docid": "neg:1840135_14",
"text": "Detecting fluid emissions (e.g. urination or leaks) that extend beyond containment systems (e.g. diapers or adult pads) is a cause of concern for users and developers of wearable fluid containment products. Immediate, automated detection would allow users to address the situation quickly, preventing medical conditions such as adverse skin effects and avoiding embarrassment. For product development, fluid emission detection systems would enable more accurate and efficient lab and field evaluation of absorbent products. This paper describes the development of a textile-based fluid-detection sensing method that uses a multi-layer \"keypad matrix\" sensing paradigm using stitched conductive threads. Bench characterization tests determined the effects of sensor spacing, spacer fabric property, and contact pressures on wetness detection for a 5mL minimum benchmark fluid volume. The sensing method and bench-determined requirements were then applied in a close-fitting torso garment for babies that fastens at the crotch (onesie) that is able to detect diaper leakage events. Mannequin testing of the resulting garment confirmed the ability of using wetness sensing timing to infer location of induced 5 mL leaks.",
"title": ""
},
{
"docid": "neg:1840135_15",
"text": "In the last few years, deep convolutional neural networks have become ubiquitous in computer vision, achieving state-of-the-art results on problems like object detection, semantic segmentation, and image captioning. However, they have not yet been widely investigated in the document analysis community. In this paper, we present a word spotting system based on convolutional neural networks. We train a network to extract a powerful image representation, which we then embed into a word embedding space. This allows us to perform word spotting using both query-by-string and query-by-example in a variety of word embedding spaces, both learned and handcrafted, for verbatim as well as semantic word spotting. Our novel approach is versatile and the evaluation shows that it outperforms the previous state-of-the-art for word spotting on standard datasets.",
"title": ""
},
{
"docid": "neg:1840135_16",
"text": "In this paper, we describe COLABA, a large effort to create resources and processing tools for Dialectal Arabic Blogs. We describe the objectives of the project, the process flow and the interaction between the different components. We briefly describe the manual annotation effort and the resources created. Finally, we sketch how these resources and tools are put together to create DIRA, a termexpansion tool for information retrieval over dialectal Arabic collections using Modern Standard Arabic queries.",
"title": ""
},
{
"docid": "neg:1840135_17",
"text": "People are increasingly required to disclose personal information to computer- and Internet-based systems in order to register, identify themselves or simply for the system to work as designed. In the present paper, we outline two different methods to easily measure people’s behavioral self-disclosure to web-based forms. The first, the use of an ‘I prefer not to say’ option to sensitive questions, is shown to be responsive to the manipulation of level of privacy concern by increasing the salience of privacy issues, and to experimental manipulations of privacy. The second, blurring or increased ambiguity, was used primarily by males in response to an income question in a high privacy condition. Implications for the study of self-disclosure in human–computer interaction and web-based research are discussed. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840135_18",
"text": "The strategic use of first-party content by two-sided platforms is driven by two key factors: the nature of buyer and seller expectations (favorable versus unfavorable) and the nature of the relationship between first-party content and third-party content (complements or substitutes). Platforms facing unfavorable expectations face an additional constraint: their prices and first-party content investment need to be such that low (zero) participation equilibria are eliminated. This additional constraint typically leads them to invest more (less) in first-party content relative to platforms facing favorable expectations when first- and third-party content are substitutes (complements). These results hold with both simultaneous and sequential entry of the two sides. With two competing platforms—incumbent facing favorable expectations and entrant facing unfavorable expectations—and multi-homing on one side of the market, the incumbent always invests (weakly) more in first-party content relative to the case in which it is a monopolist.",
"title": ""
},
{
"docid": "neg:1840135_19",
"text": "Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.",
"title": ""
}
] |
1840136 | Recurrent Neural Network Postfilters for Statistical Parametric Speech Synthesis | [
{
"docid": "pos:1840136_0",
"text": "This paper derives a speech parameter generation algorithm for HMM-based speech synthesis, in which speech parameter sequence is generated from HMMs whose observation vector consists of spectral parameter vector and its dynamic feature vectors. In the algorithm, we assume that the state sequence (state and mixture sequence for the multi-mixture case) or a part of the state sequence is unobservable (i.e., hidden or latent). As a result, the algorithm iterates the forward-backward algorithm and the parameter generation algorithm for the case where state sequence is given. Experimental results show that by using the algorithm, we can reproduce clear formant structure from multi-mixture HMMs as compared with that produced from single-mixture HMMs.",
"title": ""
},
{
"docid": "pos:1840136_1",
"text": "Feed-forward, Deep neural networks (DNN)-based text-tospeech (TTS) systems have been recently shown to outperform decision-tree clustered context-dependent HMM TTS systems [1, 4]. However, the long time span contextual effect in a speech utterance is still not easy to accommodate, due to the intrinsic, feed-forward nature in DNN-based modeling. Also, to synthesize a smooth speech trajectory, the dynamic features are commonly used to constrain speech parameter trajectory generation in HMM-based TTS [2]. In this paper, Recurrent Neural Networks (RNNs) with Bidirectional Long Short Term Memory (BLSTM) cells are adopted to capture the correlation or co-occurrence information between any two instants in a speech utterance for parametric TTS synthesis. Experimental results show that a hybrid system of DNN and BLSTM-RNN, i.e., lower hidden layers with a feed-forward structure which is cascaded with upper hidden layers with a bidirectional RNN structure of LSTM, can outperform either the conventional, decision tree-based HMM, or a DNN TTS system, both objectively and subjectively. The speech trajectory generated by the BLSTM-RNN TTS is fairly smooth and no dynamic constraints are needed.",
"title": ""
}
] | [
{
"docid": "neg:1840136_0",
"text": "The basic paradigm of asset pricing is in vibrant flux. The purely rational approach is being subsumed by a broader approach based upon the psychology of investors. In this approach, security expected returns are determined by both risk and misvaluation. This survey sketches a framework for understanding decision biases, evaluates the a priori arguments and the capital market evidence bearing on the importance of investor psychology for security prices, and reviews recent models. The best plan is . . . to profit by the folly of others. — Pliny the Elder, from John Bartlett, comp. Familiar Quotations, 9th ed. 1901. IN THE MUDDLED DAYS BEFORE THE RISE of modern finance, some otherwise-reputable economists, such as Adam Smith, Irving Fisher, John Maynard Keynes, and Harry Markowitz, thought that individual psychology affects prices.1 What if the creators of asset-pricing theory had followed this thread? Picture a school of sociologists at the University of Chicago proposing the Deficient Markets Hypothesis: that prices inaccurately reflect all available information. A brilliant Stanford psychologist, call him Bill Blunte, invents the Deranged Anticipation and Perception Model (or DAPM), in which proxies for market misvaluation are used to predict security returns. Imagine the euphoria when researchers discovered that these mispricing proxies (such * Hirshleifer is from the Fisher College of Business, The Ohio State University. This survey was written for presentation at the American Finance Association Annual Meetings in New Orleans, January, 2001. I especially thank the editor, George Constantinides, for valuable comments and suggestions. 
I also thank Franklin Allen, the discussant, Nicholas Barberis, Robert Bloomfield, Michael Brennan, Markus Brunnermeier, Joshua Coval, Kent Daniel, Ming Dong, Jack Hirshleifer, Harrison Hong, Soeren Hvidkjaer, Ravi Jagannathan, Narasimhan Jegadeesh, Andrew Karolyi, Charles Lee, Seongyeon Lim, Deborah Lucas, Rajnish Mehra, Norbert Schwarz, Jayanta Sen, Tyler Shumway, René Stulz, Avanidhar Subrahmanyam, Siew Hong Teoh, Sheridan Titman, Yue Wang, Ivo Welch, and participants of the Dice Finance Seminar at Ohio State University for very helpful discussions and comments. 1 Smith analyzed how the “overweening conceit” of mankind caused labor to be underpriced in more enterprising pursuits. Young workers do not arbitrage away pay differentials because they are prone to overestimate their ability to succeed. Fisher wrote a book on money illusion; in The Theory of Interest (1930, pp. 493–494) he argued that nominal interest rates systematically fail to adjust sufficiently for inflation, and explained savings behavior in relation to self-control, foresight, and habits. Keynes (1936) famously commented on animal spirits in stock markets. Markowitz (1952) proposed that people focus on gains and losses relative to reference points, and that this helps explain the pricing of insurance and lotteries. THE JOURNAL OF FINANCE • VOL. LVI, NO. 4 • AUGUST 2001",
"title": ""
},
{
"docid": "neg:1840136_1",
"text": "Trunk movements in the frontal and sagittal planes were studied in 10 healthy males (18-35 yrs) during normal walking (1.0-2.5 m/s) and running (2.0-6.0 m/s) on a treadmill. Movements were recorded with a Selspot optoelectronic system. Directions, amplitudes and phase relationships to the stride cycle (defined by the leg movements) were analyzed for both linear and angular displacements. During one stride cycle the trunk displayed two oscillations in the vertical (mean net amplitude 2.5-9.5 cm) and horizontal, forward-backward directions (mean net amplitude 0.5-3 cm) and one oscillation in the lateral, side to side direction (mean net amplitude 2-6 cm). The magnitude and timing of the various oscillations varied in a different way with speed and mode of progression. Differences in amplitudes and timing of the movements at separate levels along the spine gave rise to angular oscillations with a similar periodicity as the linear displacements in both planes studied. The net angular trunk tilting in the frontal plane increased with speed from 3-10 degrees. The net forward-backward trunk inclination showed a small increase with speed up to 5 degrees in fast running. The mean forward inclination of the trunk increased from 6 degrees to about 13 degrees with speed. Peak inclination to one side occurred during the support phase of the leg on the same side. Peak forward inclination was reached at the initiation of the support phase in walking, whereas in running the peak inclination was in the opposite direction at this point. The adaptations of trunk movements to speed and mode of progression could be related to changing mechanical conditions and different demands on equilibrium control due to e.g. changes in support phase duration and leg movements.",
"title": ""
},
{
"docid": "neg:1840136_2",
"text": "The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information.",
"title": ""
},
{
"docid": "neg:1840136_3",
"text": "Rigid robotic manipulators employ traditional sensors such as encoders or potentiometers to measure joint angles and determine end-effector position. Manipulators that are flexible, however, introduce motions that are much more difficult to measure. This is especially true for continuum manipulators that articulate by means of material compliance. In this paper, we present a vision based system for quantifying the 3-D shape of a flexible manipulator in real-time. The sensor system is validated for accuracy with known point measurements and for precision by estimating a known 3-D shape. We present two applications of the validated system relating to the open-loop control of a tendon driven continuum manipulator. In the first application, we present a new continuum manipulator model and use the sensor to quantify 3-D performance. In the second application, we use the shape sensor system for model parameter estimation in the absence of tendon tension information.",
"title": ""
},
{
"docid": "neg:1840136_4",
"text": "Although single dialyzer use and reuse by chemical reprocessing are both associated with some complications, there is no definitive advantage to either in this respect. Some complications occur mainly at the first use of a dialyzer: a new cellophane or cuprophane membrane may activate the complement system, or a noxious agent may be introduced to the dialyzer during production or generated during storage. These agents may not be completely removed during the routine rinsing procedure. The reuse of dialyzers is associated with environmental contamination, allergic reactions, residual chemical infusion (rebound release), inadequate concentration of disinfectants, and pyrogen reactions. Bleach used during reprocessing causes a progressive increase in dialyzer permeability to larger molecules, including albumin. Reprocessing methods without the use of bleach are associated with progressive decreases in membrane permeability, particularly to larger molecules. Most comparative studies have not shown differences in mortality between centers reusing and those not reusing dialyzers; however, the largest cluster of dialysis-related deaths occurred with single-use dialyzers due to the presence of perfluorohydrocarbon introduced during the manufacturing process and not completely removed during preparation of the dialyzers before the dialysis procedure. The cost savings associated with reuse is substantial, especially with more expensive, high-flux synthetic membrane dialyzers. With reuse, some dialysis centers can afford to utilize more efficient dialyzers that are more expensive; consequently they provide a higher dose of dialysis and reduce mortality. Some studies have shown minimally higher morbidity with chemical reuse, depending on the method. Waste disposal is definitely decreased with the reuse of dialyzers; thus environmental impacts are lessened, particularly if reprocessing is done by heat disinfection. 
It is safe to predict that dialyzer reuse in dialysis centers will continue because it also saves money for the providers. Saving both time for the patient and money for the provider were the main motivations to design a new machine for daily home hemodialysis. The machine, developed in the 1990s, cleans and heat disinfects the dialyzer and lines in situ so they do not need to be changed for a month. In contrast, reuse of dialyzers in home hemodialysis patients treated with other hemodialysis machines is becoming less popular and is almost extinct.",
"title": ""
},
{
"docid": "neg:1840136_5",
"text": "We describe a batch method that uses a sizeable fraction of the training set at each iteration, and that employs second-order information. • To improve the learning process, we follow a multi-batch approach in which the batch changes at each iteration. • This inherently gives the algorithm a stochastic flavor that can cause instability in L-BFGS. • We show how to perform stable quasi-Newton updating in the multi-batch setting, illustrate the behavior of the algorithm in a distributed computing platform, and study its convergence properties for both the convex and nonconvex cases. Introduction: min_{w ∈ R^d} F(w) = (1/n) Σ_{i=1}^{n} f(w; x^i, y^i). Idea: select a sizeable sample S_k ⊂ {1, ..., n} at every iteration and perform quasi-Newton steps. 1. Distributed computing setting: distributed gradient computation (with faults). 2. Multi-batch setting: samples are changed at every iteration to accelerate learning. Goal: show that stable quasi-Newton updating can be achieved in both settings without incurring extra computational cost or special synchronization. Issue: the samples used at the beginning and at the end of every iteration are different, which is potentially harmful for quasi-Newton methods. Key: controlled sampling; consecutive samples overlap, S_k ∩ S_{k+1} = O_k ≠ ∅, and gradient differences based on this overlap yield stable quasi-Newton updates. Multi-Batch L-BFGS Method. At the k-th iteration a sample S_k ⊂ {1, ..., n} is chosen, and iterates are updated via w_{k+1} = w_k − α_k H_k g_k^{S_k}, where g_k^{S_k} = (1/|S_k|) Σ_{i ∈ S_k} ∇f(w_k; x^i, y^i) is the batch gradient and H_k is the inverse BFGS Hessian approximation, updated via H_{k+1} = V_k^T H_k V_k + ρ_k s_k s_k^T, with ρ_k = 1/(y_k^T s_k) and V_k = I − ρ_k y_k s_k^T. To ensure consistent curvature-pair updates, s_{k+1} = w_{k+1} − w_k and y_{k+1} = g_{k+1}^{O_k} − g_k^{O_k}, where g_{k+1}^{O_k} and g_k^{O_k} are gradients based on the overlapping samples only, O_k = S_k ∩ S_{k+1}. Sample selection:",
"title": ""
},
{
"docid": "neg:1840136_6",
"text": "Alongside developing systems for scalable machine learning and collaborative data science activities, there is an increasing trend toward publicly shared data science projects, hosted in general or dedicated hosting services, such as GitHub and DataHub. The artifacts of the hosted projects are rich and include not only text files, but also versioned datasets, trained models, project documents, etc. Under the fast pace and expectation of data science activities, model discovery, i.e., finding relevant data science projects to reuse, is an important task in the context of data management for end-to-end machine learning. In this paper, we study the task and present the ongoing work on ModelHub Discovery, a system for finding relevant models in hosted data science projects. Instead of prescribing a structured data model for data science projects, we take an information retrieval approach by decomposing the discovery task into three major steps: project query and matching, model comparison and ranking, and processing and building ensembles with returned models. We describe the motivation and desiderata, propose techniques, and present opportunities and challenges for model discovery for hosted data science projects.",
"title": ""
},
{
"docid": "neg:1840136_7",
"text": "After about a decade of intense research, spurred by both economic and operational considerations, and by environmental concerns, energy efficiency has now become a key pillar in the design of communication networks. With the advent of the fifth generation of wireless networks, with millions more base stations and billions of connected devices, the need for energy-efficient system design and operation will be even more compelling. This survey provides an overview of energy-efficient wireless communications, reviews seminal and recent contribution to the state-of-the-art, including the papers published in this special issue, and discusses the most relevant research challenges to be addressed in the future.",
"title": ""
},
{
"docid": "neg:1840136_8",
"text": "Ground vehicles equipped with monocular vision systems are a valuable source of high resolution image data for precision agriculture applications in orchards. This paper presents an image processing framework for fruit detection and counting using orchard image data. A general purpose image segmentation approach is used, including two feature learning algorithms; multi-scale Multi-Layered Perceptrons (MLP) and Convolutional Neural Networks (CNN). These networks were extended by including contextual information about how the image data was captured (metadata), which correlates with some of the appearance variations and/or class distributions observed in the data. The pixel-wise fruit segmentation output is processed using the Watershed Segmentation (WS) and Circular Hough Transform (CHT) algorithms to detect and count individual fruits. Experiments were conducted in a commercial apple orchard near Melbourne, Australia. The results show an improvement in fruit segmentation performance with the inclusion of metadata on the previously benchmarked MLP network. We extend this work with CNNs, bringing agrovision closer to the state-of-the-art in computer vision, where although metadata had negligible influence, the best pixel-wise F1-score of 0.791 was achieved. The WS algorithm produced the best apple detection and counting results, with a detection F1-score of 0.858. As a final step, image fruit counts were accumulated over multiple rows at the orchard and compared against the post-harvest fruit counts that were obtained from a grading and counting machine. The count estimates using CNN and WS resulted in the best performance for this dataset, with a squared correlation coefficient of r = 0.826.",
"title": ""
},
{
"docid": "neg:1840136_9",
"text": "A challenge in teaching usability engineering is providing appropriate hands-on project experience. Students need projects that are realistic enough to address meaningful issues, but manageable within one semester. We describe our use of online case studies to motivate and model course projects in usability engineering. The cases illustrate scenario-based usability methods, and are accessed via a custom browser. We summarize the content and organization of the case studies, several case-based learning activities, and students' reactions to the activities. We conclude with a discussion of future directions for case studies in HCI education.",
"title": ""
},
{
"docid": "neg:1840136_10",
"text": "We present a cognitively plausible novel framework capable of learning the grounding in visual semantics and the grammar of natural language commands given to a robot in a table top environment. The input to the system consists of video clips of a manually controlled robot arm, paired with natural language commands describing the action. No prior knowledge is assumed about the meaning of words, or the structure of the language, except that there are different classes of words (corresponding to observable actions, spatial relations, and objects and their observable properties). The learning process automatically clusters the continuous perceptual spaces into concepts corresponding to linguistic input. A novel relational graph representation is used to build connections between language and vision. As well as the grounding of language to perception, the system also induces a set of probabilistic grammar rules. The knowledge learned is used to parse new commands involving previously unseen objects.",
"title": ""
},
{
"docid": "neg:1840136_11",
"text": "Hash functions play an important role in modern cryptography. This paper investigates optimisation techniques that have recently been proposed in the literature. A new VLSI architecture for the SHA-256 and SHA-512 hash functions is presented, which combines two popular hardware optimisation techniques, namely pipelining and unrolling. The SHA processors are developed for implementation on FPGAs, thereby allowing rapid prototyping of several designs. Speed/area results from these processors are analysed and are shown to compare favourably with other FPGA-based implementations, achieving the fastest data throughputs in the literature to date",
"title": ""
},
{
"docid": "neg:1840136_12",
"text": "This paper presents a bi-directional converter applied in electric bike. The main structure is a cascade buck-boost converter, which transfers the energy stored in battery for driving motor, and can recycle the energy resulted from the back electromotive force (BEMF) to charge battery by changing the operation mode. Moreover, the proposed converter can also serve as a charger by connecting with AC line directly. Besides, the single-chip DSP TMS320F2812 is adopted as a control core to manage the switching behaviors of each mode and to detect the battery capacity. In this paper, the equivalent models of each mode and complete design considerations are all detailed. All the experimental results are used to demonstrate the feasibility.",
"title": ""
},
{
"docid": "neg:1840136_13",
"text": "This note evaluates several hardware platforms and operating systems using a set of benchmarks that test memory bandwidth and various operating system features such as kernel entry/exit and file systems. The overall conclusion is that operating system performance does not seem to be improving at the same rate as the base speed of the underlying hardware. Copyright 1989 Digital Equipment Corporation. Western Research Laboratory, 100 Hamilton Avenue, Palo Alto, California 94301 USA",
"title": ""
},
{
"docid": "neg:1840136_14",
"text": "The design and manufacturing of pop-up books are mainly manual at present, but a number of the processes therein can benefit from computerization and automation. This paper studies one aspect of the design of pop-up books: the mathematical modelling and simulation of the pieces popping up as a book is opened. It develops the formulae for the essential parameters in the pop-up animation. This animation enables the designer to determine on a computer whether a particular set-up is appropriate to the theme which the page is designed to express, removing the need for the laborious and time-consuming task of making manual prototypes",
"title": ""
},
{
"docid": "neg:1840136_15",
"text": "Activity theory holds that the human mind is the product of our interaction with people and artifacts in the context of everyday activity. Acting with Technology makes the case for activity theory as a basis for...",
"title": ""
},
{
"docid": "neg:1840136_16",
"text": "We describe the first sub-quadratic sampling algorithm for the Multiplicative Attribute Graph Model (MAGM) of Kim and Leskovec (2010). We exploit the close connection between MAGM and the Kronecker Product Graph Model (KPGM) of Leskovec et al. (2010), and show that to sample a graph from a MAGM it suffices to sample a small number of KPGM graphs and quilt them together. Under a restricted set of technical conditions our algorithm runs in O((log2(n))^3 |E|) time, where n is the number of nodes and |E| is the number of edges in the sampled graph. We demonstrate the scalability of our algorithm via extensive empirical evaluation; we can sample a MAGM graph with 8 million nodes and 20 billion edges in under 6 hours.",
"title": ""
},
{
"docid": "neg:1840136_17",
"text": "A DC-DC buck converter capable of handling loads from 20 μA to 100 mA and operating off a 2.8-4.2 V battery is implemented in a 45 nm CMOS process. In order to handle high battery voltages in this deeply scaled technology, multiple transistors are stacked in the power train. Switched-Capacitor DC-DC converters are used for internal rail generation for stacking and supplies for control circuits. An I-C DAC pulse width modulator with sleep mode control is proposed which is both area and power-efficient as compared with previously published pulse width modulator schemes. Both pulse frequency modulation (PFM) and pulse width modulation (PWM) modes of control are employed for the wide load range. The converter achieves a peak efficiency of 75% at 20 μA, 87.4% at 12 mA in PFM, and 87.2% at 53 mA in PWM.",
"title": ""
},
{
"docid": "neg:1840136_18",
"text": "This paper presents a new permanent-magnet gear based on the cycloid gearing principle, which normally is characterized by an extreme torque density and a very high gearing ratio. An initial design of the proposed magnetic gear was designed, analyzed, and optimized with an analytical model regarding torque density. The results were promising as compared to other high-performance magnetic-gear designs. A test model was constructed to verify the analytical model.",
"title": ""
},
{
"docid": "neg:1840136_19",
"text": "This study aimed to verify whether achieving a distinctive academic performance is unlikely for students at high risk of smartphone addiction. Additionally, it verified whether this phenomenon was equally applicable to male and female students. After implementing systematic random sampling, 293 university students participated by completing an online survey questionnaire posted on the university's student information system. The survey questionnaire collected demographic information and responses to the Smartphone Addiction Scale-Short Version (SAS-SV) items. The results showed that male and female university students were equally susceptible to smartphone addiction. Additionally, male and female university students were equal in achieving cumulative GPAs with distinction or higher within the same levels of smartphone addiction. Furthermore, undergraduate students who were at a high risk of smartphone addiction were less likely to achieve cumulative GPAs of distinction or higher.",
"title": ""
}
] |
1840137 | Development of a Mobile Robot Test Platform and Methods for Validation of Prognostics-Enabled Decision Making Algorithms | [
{
"docid": "pos:1840137_0",
"text": "This paper presents an empirical model to describe battery behavior during individual discharge cycles as well as over its cycle life. The basis for the form of the model has been linked to the internal processes of the battery and validated using experimental data. Subsequently, the model has been used in a Particle Filtering framework to make predictions of remaining useful life for individual discharge cycles as well as for cycle life. The prediction performance was found to be satisfactory as measured by performance metrics customized for prognostics. The work presented here provides initial steps towards a comprehensive health management solution for energy storage devices.",
"title": ""
}
] | [
{
"docid": "neg:1840137_0",
"text": "We use a regression method to determine the Noteworthiness Score of the sentences in a paper and then use an Integer Linear Programming (ILP) algorithm to create well-organized slides by selecting and aligning key phrases and sentences. Evaluation results based on a set of 200 pairs of papers and slides gathered from the web show that our proposed PPSGen system can generate slides of better quality, and quickly. This paper discusses a technique for automatically generating summary slides from a text, studying the automatic generation of presentation slides from a technical paper, and examines the challenging task of creating presentation slides from an academic paper. The generated slides can be used as a draft to help presenters prepare their formal slides in a quick manner. This paper introduces a novel system called PPSGen to help presenters generate such slides. A user study also demonstrates that PPSGen has a clear advantage over baseline methods. Keywords: Support Vector Regression (SVR), Integer Linear Programming (ILP), abstraction methods, text mining, classification, etc.",
"title": ""
},
{
"docid": "neg:1840137_1",
"text": "This chapter provides information on commonly used equipment in industrial mammalian cell culture, with an emphasis on bioreactors. The actual equipment used in the cell culture process can vary from one company to another, but the main steps remain the same. The process involves expansion of cells in seed train and inoculation train processes followed by cultivation of cells in a production bioreactor. Process and equipment options for each stage of the cell culture process are introduced and examples are provided. Finally, the use of disposables during seed train and cell culture production is discussed.",
"title": ""
},
{
"docid": "neg:1840137_2",
"text": "Monitoring aquatic environment is of great interest to the ecosystem, marine life, and human health. This paper presents the design and implementation of Samba -- an aquatic surveillance robot that integrates an off-the-shelf Android smartphone and a robotic fish to monitor harmful aquatic processes such as oil spill and harmful algal blooms. Using the built-in camera of on-board smartphone, Samba can detect spatially dispersed aquatic processes in dynamic and complex environments. To reduce the excessive false alarms caused by the non-water area (e.g., trees on the shore), Samba segments the captured images and performs target detection in the identified water area only. However, a major challenge in the design of Samba is the high energy consumption resulted from the continuous image segmentation. We propose a novel approach that leverages the power-efficient inertial sensors on smartphone to assist the image processing. In particular, based on the learned mapping models between inertial and visual features, Samba uses real-time inertial sensor readings to estimate the visual features that guide the image segmentation, significantly reducing energy consumption and computation overhead. Samba also features a set of lightweight and robust computer vision algorithms, which detect harmful aquatic processes based on their distinctive color features. Lastly, Samba employs a feedback-based rotation control algorithm to adapt to spatiotemporal evolution of the target aquatic process. We have implemented a Samba prototype and evaluated it through extensive field experiments, lab experiments, and trace-driven simulations. The results show that Samba can achieve 94% detection rate, 5% false alarm rate, and a lifetime up to nearly two months.",
"title": ""
},
{
"docid": "neg:1840137_3",
"text": "This paper introduces a new image thresholding method based on minimizing the measures of fuzziness of an input image. The membership function in the thresholding method is used to denote the characteristic relationship between a pixel and its belonging region (the object or the background). In addition, based on the measure of fuzziness, a fuzzy range is defined to find the adequate threshold value within this range. The principle of the method is easy to understand and it can be directly extended to multilevel thresholding. The effectiveness of the new method is illustrated by using test images having various types of histograms. The experimental results indicate that the proposed method has demonstrated good performance in bilevel and trilevel thresholding. Image thresholding; Measure of fuzziness; Fuzzy membership function. I. INTRODUCTION. Image thresholding, which extracts the object from the background in an input image, is one of the most common applications in image analysis. For example, in automatic recognition of machine printed or handwritten texts, in shape recognition of objects, and in image enhancement, thresholding is a necessary step for image preprocessing. Among the image thresholding methods, bilevel thresholding separates the pixels of an image into two regions (i.e. the object and the background); one region contains pixels with gray values smaller than the threshold value and the other contains pixels with gray values larger than the threshold value. Further, if the pixels of an image are divided into more than two regions, this is called multilevel thresholding. In general, the threshold is located at the obvious and deep valley of the histogram. However, when the valley is not so obvious, it is very difficult to determine the threshold. During the past decade, many research studies have been devoted to the problem of selecting the appropriate threshold value. 
A survey of these papers can be found in the literature (1-3). Fuzzy set theory has been applied to image thresholding to partition the image space into meaningful regions by minimizing the measure of fuzziness of the image. The measurement can be expressed by terms such as entropy (4), index of fuzziness (5), and index of nonfuzziness (6). The \"entropy\" involves using Shannon's function to measure the fuzziness of an image so that the threshold can be determined by minimizing the entropy measure. It is very different from the classical entropy measure which measures probabilistic information. The index of fuzziness represents the average amount of fuzziness in an image by measuring the distance between the gray-level image and its near crisp (binary) version. The index of nonfuzziness indicates the average amount of nonfuzziness (crispness) in an image by taking the absolute difference between the crisp version and its complement. In addition, Pal and Rosenfeld (7) developed an algorithm based on minimizing the compactness of fuzziness to obtain the fuzzy and nonfuzzy versions of an ill-defined image such that the appropriate nonfuzzy threshold can be chosen. They used some fuzzy geometric properties, i.e. the area and the perimeter of a fuzzy image, to obtain the measure of compactness. The effectiveness of the method has been illustrated by using two input images having bimodal and unimodal histograms. Another measurement, which is called the index of area coverage (IOAC) (8), has been applied to select the threshold by finding the local minima of the IOAC. Since both the measure of compactness and the IOAC involve the spatial information of an image, they need a long time to compute the perimeter of the fuzzy plane. In this paper, based on the concept of fuzzy sets, an effective thresholding method is proposed. 
Given a certain threshold value, the membership function of a pixel is defined by the absolute difference between the gray level and the average gray level of its belonging region (i.e. the object or the background). The larger the absolute difference is, the smaller the membership value becomes. It is expected that the membership value of each pixel in the input image is as large as possible. In addition, two measures of fuzziness are proposed to indicate the fuzziness of an image. The optimal threshold can then be effectively determined by minimizing the measure of fuzziness of an image. The performance of the proposed approach is compared",
"title": ""
},
{
"docid": "neg:1840137_4",
"text": "We propose a general object localization and retrieval scheme based on object shape using deformable templates. Prior knowledge of an object shape is described by a prototype template which consists of the representative contour/edges, and a set of probabilistic deformation transformations on the template. A Bayesian scheme, which is based on this prior knowledge and the edge information in the input image, is employed to find a match between the deformed template and objects in the image. Computational efficiency is achieved via a coarse-to-fine implementation of the matching algorithm. Our method has been applied to retrieve objects with a variety of shapes from images with complex background. The proposed scheme is invariant to location, rotation, and moderate scale changes of the template.",
"title": ""
},
{
"docid": "neg:1840137_5",
"text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classic tf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.",
"title": ""
},
{
"docid": "neg:1840137_6",
"text": "Recent works have been shown effective in using neural networks for Chinese word segmentation. However, these models rely on large-scale data and are less effective for low-resource datasets because of insufficient training data. Thus, we propose a transfer learning method to improve low-resource word segmentation by leveraging high-resource corpora. First, we train a teacher model on high-resource corpora and then use the learned knowledge to initialize a student model. Second, a weighted data similarity method is proposed to train the student model on low-resource data with the help of high-resource corpora. Finally, given that insufficient data puts forward higher requirements for feature extraction, we propose a novel neural network which improves feature learning. Experiment results show that our work significantly improves the performance on low-resource datasets: 2.3% and 1.5% F-score on PKU and CTB datasets. Furthermore, this paper achieves state-of-the-art results: 96.1% and 96.2% F-score on PKU and CTB datasets. Besides, we explore an asynchronous parallel method on neural word segmentation to speed up training. The parallel method accelerates training substantially and is almost five times faster than a serial mode.",
"title": ""
},
{
"docid": "neg:1840137_7",
"text": "Given e-commerce scenarios that user profiles are invisible, session-based recommendation is proposed to generate recommendation results from short sessions. Previous work only considers the user's sequential behavior in the current session, whereas the user's main purpose in the current session is not emphasized. In this paper, we propose a novel neural networks framework, i.e., Neural Attentive Recommendation Machine (NARM), to tackle this problem. Specifically, we explore a hybrid encoder with an attention mechanism to model the user's sequential behavior and capture the user's main purpose in the current session, which are combined as a unified session representation later. We then compute the recommendation scores for each candidate item with a bi-linear matching scheme based on this unified session representation. We train NARM by jointly learning the item and session representations as well as their matchings. We carried out extensive experiments on two benchmark datasets. Our experimental results show that NARM outperforms state-of-the-art baselines on both datasets. Furthermore, we also find that NARM achieves a significant improvement on long sessions, which demonstrates its advantages in modeling the user's sequential behavior and main purpose simultaneously.",
"title": ""
},
{
"docid": "neg:1840137_8",
"text": "In coming to understand the world-in learning concepts, acquiring language, and grasping causal relations-our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?",
"title": ""
},
{
"docid": "neg:1840137_9",
"text": "Purpose – Customer relationship management (CRM) is an information system that tracks customers’ interactions with the firm and allows employees to instantly pull up information about the customers such as past sales, service records, outstanding records and unresolved problem calls. This paper aims to put forward strategies for successful implementation of CRM and discusses barriers to CRM in e-business and m-business. Design/methodology/approach – The paper combines narrative with argument and analysis. Findings – CRM stores all information about its customers in a database and uses this data to coordinate sales, marketing, and customer service departments so as to work together smoothly to best serve their customers’ needs. Originality/value – The paper demonstrates how CRM, if used properly, could enhance a company’s ability to achieve the ultimate goal of retaining customers and gain strategic advantage over its competitors.",
"title": ""
},
{
"docid": "neg:1840137_10",
"text": "Over the last several years, the industrial and information technology fields have undergone profound changes, entering the \"Industry 4.0\" era. Industry 4.0, as a representative of the Fourth Industrial Revolution, evolved from embedded systems to the Cyber-Physical System (CPS). Manufacturing will be conducted via the Internet, achieving internal and external network integration and moving toward intelligence. This paper introduces the development of Industry 4.0, and the Cyber-Physical System is introduced with the example of the Wise Information Technology of 120 (WIT120); then the application of Industry 4.0 in intelligent manufacturing is put forward, moving from the digital factory to the intelligent factory. Finally, the future development direction of Industry 4.0 is analyzed, which provides a reference for its application in intelligent manufacturing.",
"title": ""
},
{
"docid": "neg:1840137_11",
"text": "The classical Rough Set Theory (RST) always generates too many rules, making it difficult for decision makers to choose a suitable rule. In this study, we use two processes (pre process and post process) to select suitable rules and to explore the relationship among attributes. In pre process, we propose a pruning process to select suitable rules by setting up a threshold on the support object of decision rules, to thereby solve the problem of too many rules. The post process used the formal concept analysis from these suitable rules to explore the attribute relationship and the most important factors affecting decision making for choosing behaviours of personal investment portfolios. In this study, we explored the main concepts (characteristics) for the conservative portfolio: the stable job, less than 4 working years, and the gender is male; the moderate portfolio: high school education, the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), the gender is male; and the aggressive portfolio: the monthly salary between NT$30,001 (US$1000) and NT$80,000 (US$2667), less than 4 working years, and a stable job. The study result successfully explored the most important factors affecting the personal investment portfolios and the suitable rules that can help decision makers. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840137_12",
"text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.",
"title": ""
},
{
"docid": "neg:1840137_13",
"text": "We present an iterative method for solving linear systems, which has the property of minimizing at every step the norm of the residual vector over a Krylov subspace. The algorithm is derived from the Arnoldi process for constructing an /2-orthogonal basis of Krylov subspaces. It can be considered as a generalization of Paige and Saunders' MINRES algorithm and is theoretically equivalent to the Generalized Conjugate Residual (GCR) method and to ORTHODIR. The new algorithm presents several advantages over GCR and ORTHODIR.",
"title": ""
},
{
"docid": "neg:1840137_14",
"text": "Providing force feedback as relevant information in current Robot-Assisted Minimally Invasive Surgery systems constitutes a technological challenge due to the constraints imposed by the surgical environment. In this context, Sensorless Force Estimation techniques represent a potential solution, enabling to sense the interaction forces between the surgical instruments and soft-tissues. Specifically, if visual feedback is available for observing soft-tissues’ deformation, this feedback can be used to estimate the forces applied to these tissues. To this end, a force estimation model, based on Convolutional Neural Networks and Long-Short Term Memory networks, is proposed in this work. This model is designed to process both, the spatiotemporal information present in video sequences and the temporal structure of tool data (the surgical tool-tip trajectory and its grasping status). A series of analyses are carried out to reveal the advantages of the proposal and the challenges that remain for real applications. This research work focuses on two surgical task scenarios, referred to as pushing and pulling tissue. For these two scenarios, different input data modalities and their effect on the force estimation quality are investigated. These input data modalities are tool data, video sequences and a combination of both. The results suggest that the force estimation quality is better when both, the tool data and video sequences, are processed by the neural network model. Moreover, this study reveals the need for a loss function, designed to promote the modeling of smooth and sharp details found in force signals. Finally, the results show that the modeling of forces due to pulling tasks is more challenging than for the simplest pushing actions.",
"title": ""
},
{
"docid": "neg:1840137_15",
"text": "Convolutional Neural Networks (CNNs) currently achieve state-of-the-art accuracy in image classification. With a growing number of classes, the accuracy usually drops as the possibilities of confusion increase. Interestingly, the class confusion patterns follow a hierarchical structure over the classes. We present visual-analytics methods to reveal and analyze this hierarchy of similar classes in relation with CNN-internal data. We found that this hierarchy not only dictates the confusion patterns between the classes, it furthermore dictates the learning behavior of CNNs. In particular, the early layers in these networks develop feature detectors that can separate high-level groups of classes quite well, even after a few training epochs. In contrast, the latter layers require substantially more epochs to develop specialized feature detectors that can separate individual classes. We demonstrate how these insights are key to significant improvement in accuracy by designing hierarchy-aware CNNs that accelerate model convergence and alleviate overfitting. We further demonstrate how our methods help in identifying various quality issues in the training data.",
"title": ""
},
{
"docid": "neg:1840137_16",
"text": "Security awareness is an often-overlooked factor in an information security program. While organizations expand their use of advanced security technology and continuously train their security professionals, very little is used to increase the security awareness among the normal users, making them the weakest link in any organization. As a result, today, organized cyber criminals are putting significant efforts to research and develop advanced hacking methods that can be used to steal money and information from the general public. Furthermore, the high internet penetration growth rate in the Middle East and the limited security awareness among users is making it an attractive target for cyber criminals. In this paper, we will show the need for security awareness programs in schools, universities, governments, and private organizations in the Middle East by presenting results of several security awareness studies conducted among students and professionals in UAE in 2010. This includes a comprehensive wireless security survey in which thousands of access points were detected in Dubai and Sharjah most of which are either unprotected or employ weak types of protection. Another study focuses on evaluating the chances of general users to fall victims to phishing attacks which can be used to steal bank and personal information. Furthermore, a study of the user’s awareness of privacy issues when using RFID technology is presented. Finally, we discuss several key factors that are necessary to develop a successful information security awareness program.",
"title": ""
},
{
"docid": "neg:1840137_17",
"text": "A novel method for finding active contours, or snakes as developed by Xu and Prince [1] is presented in this paper. The approach uses a regularization based technique and calculus of variations to find what the authors call a Gradient Vector Field or GVF in binary-values or grayscale images. The GVF is in turn applied to ’pull’ the snake towards the required feature. The approach presented here differs from other snake algorithms in its ability to extend into object concavities and its robust initialization technique. Although their algorithm works better than existing active contour algorithms, it suffers from computational complexity and associated costs in execution, resulting in slow execution time.",
"title": ""
},
{
"docid": "neg:1840137_18",
"text": "RaptorX Property (http://raptorx2.uchicago.edu/StructurePropertyPred/predict/) is a web server predicting structure property of a protein sequence without using any templates. It outperforms other servers, especially for proteins without close homologs in PDB or with very sparse sequence profile (i.e. carries little evolutionary information). This server employs a powerful in-house deep learning model DeepCNF (Deep Convolutional Neural Fields) to predict secondary structure (SS), solvent accessibility (ACC) and disorder regions (DISO). DeepCNF not only models complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent property labels. Our experimental results show that, tested on CASP10, CASP11 and the other benchmarks, this server can obtain ∼84% Q3 accuracy for 3-state SS, ∼72% Q8 accuracy for 8-state SS, ∼66% Q3 accuracy for 3-state solvent accessibility, and ∼0.89 area under the ROC curve (AUC) for disorder prediction.",
"title": ""
},
{
"docid": "neg:1840137_19",
"text": "Adversarial machine learning research has recently demonstrated the feasibility to confuse automatic speech recognition (ASR) models by introducing acoustically imperceptible perturbations to audio samples. To help researchers and practitioners gain better understanding of the impact of such attacks, and to provide them with tools to help them more easily evaluate and craft strong defenses for their models, we present Adagio, the first tool designed to allow interactive experimentation with adversarial attacks and defenses on an ASR model in real time, both visually and aurally. Adagio incorporates AMR and MP3 audio compression techniques as defenses, which users can interactively apply to attacked audio samples. We show that these techniques, which are based on psychoacoustic principles, effectively eliminate targeted attacks, reducing the attack success rate from 92.5% to 0%. We will demonstrate Adagio and invite the audience to try it on the Mozilla Common Voice dataset.",
"title": ""
}
] |
1840138 | HDFI: Hardware-Assisted Data-Flow Isolation | [
{
"docid": "pos:1840138_0",
"text": "Control flow defenses against ROP either use strict, expensive, but strong protection against redirected RET instructions with shadow stacks, or much faster but weaker protections without. In this work we study the inherent overheads of shadow stack schemes. We find that the overhead is roughly 10% for a traditional shadow stack. We then design a new scheme, the parallel shadow stack, and show that its performance cost is significantly less: 3.5%. Our measurements suggest it will not be easy to improve performance on current x86 processors further, due to inherent costs associated with RET and memory load/store instructions. We conclude with a discussion of the design decisions in our shadow stack instrumentation, and possible lighter-weight alternatives.",
"title": ""
},
{
"docid": "pos:1840138_1",
"text": "Systems code is often written in low-level languages like C/C++, which offer many benefits but also delegate memory management to programmers. This invites memory safety bugs that attackers can exploit to divert control flow and compromise the system. Deployed defense mechanisms (e.g., ASLR, DEP) are incomplete, and stronger defense mechanisms (e.g., CFI) often have high overhead and limited guarantees [19, 15, 9]. We introduce code-pointer integrity (CPI), a new design point that guarantees the integrity of all code pointers in a program (e.g., function pointers, saved return addresses) and thereby prevents all control-flow hijack attacks, including return-oriented programming. We also introduce code-pointer separation (CPS), a relaxation of CPI with better performance properties. CPI and CPS offer substantially better security-to-overhead ratios than the state of the art, they are practical (we protect a complete FreeBSD system and over 100 packages like apache and postgresql), effective (prevent all attacks in the RIPE benchmark), and efficient: on SPEC CPU2006, CPS averages 1.2% overhead for C and 1.9% for C/C++, while CPI’s overhead is 2.9% for C and 8.4% for C/C++. A prototype implementation of CPI and CPS can be obtained from http://levee.epfl.ch.",
"title": ""
},
{
"docid": "pos:1840138_2",
"text": "Software fault isolation (SFI) is an effective mechanism to confine untrusted modules inside isolated domains to protect their host applications. Since its debut, researchers have proposed different SFI systems for many purposes such as safe execution of untrusted native browser plugins. However, most of these systems focus on the x86 architecture. Inrecent years, ARM has become the dominant architecture for mobile devices and gains in popularity in data centers.Hence there is a compellingneed for an efficient SFI system for the ARM architecture. Unfortunately, existing systems either have prohibitively high performance overhead or place various limitations on the memory layout and instructions of untrusted modules.\n In this paper, we propose ARMlock, a hardware-based fault isolation for ARM. It uniquely leverages the memory domain support in ARM processors to create multiple sandboxes. Memory accesses by the untrusted module (including read, write, and execution) are strictly confined by the hardware,and instructions running inside the sandbox execute at the same speed as those outside it. ARMlock imposes virtually no structural constraints on untrusted modules. For example, they can use self-modifying code, receive exceptions, and make system calls. Moreover, system calls can be interposed by ARMlock to enforce the policies set by the host. We have implemented a prototype of ARMlock for Linux that supports the popular ARMv6 and ARMv7 sub-architecture. Our security assessment and performance measurement show that ARMlock is practical, effective, and efficient.",
"title": ""
}
] | [
{
"docid": "neg:1840138_0",
"text": "We address the problem of unsupervised clustering of multidimensional data when the number of clusters is not known a priori. The proposed iterative approach is a stochastic extension of the kNN density-based clustering (KNNCLUST) method which randomly assigns objects to clusters by sampling a posterior class label distribution. In our approach, contextual class-conditional distributions are estimated based on a k nearest neighbors graph, and are iteratively modified to account for current cluster labeling. Posterior probabilities are also slightly reinforced to accelerate convergence to a stationary labeling. A stopping criterion based on the measure of clustering entropy is defined thanks to the Kozachenko-Leonenko differential entropy estimator, computed from current class-conditional entropies. One major advantage of our approach relies in its ability to provide an estimate of the number of clusters present in the data set. The application of our approach to the clustering of real hyperspectral image data is considered. Our algorithm is compared with other unsupervised clustering approaches, namely affinity propagation (AP), KNNCLUST and Non Parametric Stochastic Expectation Maximization (NPSEM), and is shown to improve the correct classification rate in most experiments.",
"title": ""
},
{
"docid": "neg:1840138_1",
"text": "Page flipping is an important part of paper-based document navigation. However this affordance of paper document has not been fully transferred to digital documents. In this paper we present Flipper, a new digital document navigation technique inspired by paper document flipping. Flipper combines speed-dependent automatic zooming (SDAZ) [6] and rapid serial visual presentation (RSVP) [3], to let users navigate through documents at a wide range of speeds. It is particularly well adapted to rapid visual search. User studies show Flipper is faster than both conventional scrolling and SDAZ and is well received by users.",
"title": ""
},
{
"docid": "neg:1840138_2",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and speed are also important concerns. These are the two main characteristics that differentiate one encryption algorithm from another. This paper provides the performance comparison between four of the most commonly used encryption algorithms: DES(Data Encryption Standard), 3DES(Triple DES), BLOWFISH and AES (Rijndael). The comparison has been conducted by running several setting to process different sizes of data blocks to evaluate the algorithms encryption and decryption speed. Based on the performance analysis of these algorithms under different hardware and software platform, it has been concluded that the Blowfish is the best performing algorithm among the algorithms under the security against unauthorized attack and the speed is taken into consideration.",
"title": ""
},
{
"docid": "neg:1840138_3",
"text": "The increasing influence of social media and enormous participation of users creates new opportunities to study human social behavior along with the capability to analyze large amount of data streams. One of the interesting problems is to distinguish between different kinds of users, for example users who are leaders and introduce new issues and discussions on social media. Furthermore, positive or negative attitudes can also be inferred from those discussions. Such problems require a formal interpretation of social media logs and unit of information that can spread from person to person through the social network. Once the social media data such as user messages are parsed and network relationships are identified, data mining techniques can be applied to group different types of communities. However, the appropriate granularity of user communities and their behavior is hardly captured by existing methods. In this paper, we present a framework for the novel task of detecting communities by clustering messages from large streams of social data. Our framework uses K-Means clustering algorithm along with Genetic algorithm and Optimized Cluster Distance (OCD) method to cluster data. The goal of our proposed framework is twofold that is to overcome the problem of general K-Means for choosing best initial centroids using Genetic algorithm, as well as to maximize the distance between clusters by pairwise clustering using OCD to get an accurate clusters. We used various cluster validation metrics to evaluate the performance of our algorithm. The analysis shows that the proposed method gives better clustering results and provides a novel use-case of grouping user communities based on their activities. Our approach is optimized and scalable for real-time clustering of social media data.",
"title": ""
},
{
"docid": "neg:1840138_4",
"text": "The goal of this research is to find the efficient and most widely used cryptographic algorithms form the history, investigating one of its merits and demerits which have not been modified so far. Perception of cryptography, its techniques such as transposition & substitution and Steganography were discussed. Our main focus is on the Playfair Cipher, its advantages and disadvantages. Finally, we have proposed a few methods to enhance the playfair cipher for more secure and efficient cryptography.",
"title": ""
},
{
"docid": "neg:1840138_5",
"text": "A cost-effective position measurement system based on optical mouse sensors is presented in this work. The system is intended to be used in a planar positioning stage for microscopy applications and as such, has strict resolution, accuracy, repeatability, and sensitivity requirements. Three techniques which improve the measurement system's performance in the context of these requirements are proposed; namely, an optical magnification of the image projected onto the mouse sensor, a periodic homing procedure to reset the error buildup, and a compensation of the undesired dynamics caused by filters implemented in the mouse sensor chip.",
"title": ""
},
{
"docid": "neg:1840138_6",
"text": "An improved firefly algorithm (FA)-based band selection method is proposed for hyperspectral dimensionality reduction (DR). In this letter, DR is formulated as an optimization problem that searches a small number of bands from a hyperspectral data set, and a feature subset search algorithm using the FA is developed. To avoid employing an actual classifier within the band searching process to greatly reduce computational cost, criterion functions that can gauge class separability are preferred; specifically, the minimum estimated abundance covariance and Jeffreys-Matusita distances are employed. The proposed band selection technique is compared with an FA-based method that actually employs a classifier, the well-known sequential forward selection, and particle swarm optimization algorithms. Experimental results show that the proposed algorithm outperforms others, providing an effective option for DR.",
"title": ""
},
{
"docid": "neg:1840138_7",
"text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types.IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.",
"title": ""
},
{
"docid": "neg:1840138_8",
"text": "The main purpose of this research is to design and develop complete system of a remote-operated multi-direction Unmanned Ground Vehicle (UGV). The development involved PIC microcontroller in remote-controlled and UGV robot, Xbee Pro modules, Graphic LCD 84×84, Vexta brushless DC electric motor and mecanum wheels. This paper show the study the movement of multidirectional UGV by using Mecanum wheels with differences drive configuration. The 16-bits Microchips microcontroller were used in the UGV's system that embed with Xbee Pro through variable baud-rate value via UART protocol and control the direction of wheels. The successful develop UGV demonstrated clearly the potential application of this type of vehicle, and incorporated the necessary technology for further research of this type of vehicle.",
"title": ""
},
{
"docid": "neg:1840138_9",
"text": "We tackle image question answering (ImageQA) problem by learning a convolutional neural network (CNN) with a dynamic parameter layer whose weights are determined adaptively based on questions. For the adaptive parameter prediction, we employ a separate parameter prediction network, which consists of gated recurrent unit (GRU) taking a question as its input and a fully-connected layer generating a set of candidate weights as its output. However, it is challenging to construct a parameter prediction network for a large number of parameters in the fully-connected dynamic parameter layer of the CNN. We reduce the complexity of this problem by incorporating a hashing technique, where the candidate weights given by the parameter prediction network are selected using a predefined hash function to determine individual weights in the dynamic parameter layer. The proposed network-joint network with the CNN for ImageQA and the parameter prediction network-is trained end-to-end through back-propagation, where its weights are initialized using a pre-trained CNN and GRU. The proposed algorithm illustrates the state-of-the-art performance on all available public ImageQA benchmarks.",
"title": ""
},
{
"docid": "neg:1840138_10",
"text": "Save for some special cases, current training methods for Generative Adversarial Networks (GANs) are at best guaranteed to converge to a ‘local Nash equilibrium’ (LNE). Such LNEs, however, can be arbitrarily far from an actual Nash equilibrium (NE), which implies that there are no guarantees on the quality of the found generator or classifier. This paper proposes to model GANs explicitly as finite games in mixed strategies, thereby ensuring that every LNE is an NE. We use the Parallel Nash Memory as a solution method, which is proven to monotonically converge to a resource-bounded Nash equilibrium. We empirically demonstrate that our method is less prone to typical GAN problems such as mode collapse and produces solutions that are less exploitable than those produced by GANs and MGANs.",
"title": ""
},
{
"docid": "neg:1840138_11",
"text": "Social transmission is everywhere. Friends talk about restaurants , policy wonks rant about legislation, analysts trade stock tips, neighbors gossip, and teens chitchat. Further, such interpersonal communication affects everything from decision making and well-But although it is clear that social transmission is both frequent and important, what drives people to share, and why are some stories and information shared more than others? Traditionally, researchers have argued that rumors spread in the \" 3 Cs \" —times of conflict, crisis, and catastrophe (e.g., wars or natural disasters; Koenig, 1985)―and the major explanation for this phenomenon has been generalized anxiety (i.e., apprehension about negative outcomes). Such theories can explain why rumors flourish in times of panic, but they are less useful in explaining the prevalence of rumors in positive situations, such as the Cannes Film Festival or the dot-com boom. Further, although recent work on the social sharing of emotion suggests that positive emotion may also increase transmission, why emotions drive sharing and why some emotions boost sharing more than others remains unclear. I suggest that transmission is driven in part by arousal. Physiological arousal is characterized by activation of the autonomic nervous system (Heilman, 1997), and the mobilization provided by this excitatory state may boost sharing. This hypothesis not only suggests why content that evokes more of certain emotions (e.g., disgust) may be shared more than other a review), but also suggests a more precise prediction , namely, that emotions characterized by high arousal, such as anxiety or amusement (Gross & Levenson, 1995), will boost sharing more than emotions characterized by low arousal, such as sadness or contentment. This idea was tested in two experiments. 
They examined how manipulations that increase general arousal (i.e., watching emotional videos or jogging in place) affect the social transmission of unrelated content (e.g., a neutral news article). If arousal increases transmission, even incidental arousal (i.e., outside the focal content being shared) should spill over and boost sharing. In the first experiment, 93 students completed what they were told were two unrelated studies. The first evoked specific emotions by using film clips validated in prior research (Christie & Friedman, 2004; Gross & Levenson, 1995). Participants in the control condition watched a neutral clip; those in the experimental conditions watched an emotional clip. Emotional arousal and valence were manipulated independently so that high-and low-arousal emotions of both a positive (amusement vs. contentment) and a negative (anxiety vs. …",
"title": ""
},
{
"docid": "neg:1840138_12",
"text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.",
"title": ""
},
{
"docid": "neg:1840138_13",
"text": "Determining the polarity of a sentimentbearing expression requires more than a simple bag-of-words approach. In particular, words or constituents within the expression can interact with each other to yield a particular overall polarity. In this paper, we view such subsentential interactions in light of compositional semantics, and present a novel learningbased approach that incorporates structural inference motivated by compositional semantics into the learning procedure. Our experiments show that (1) simple heuristics based on compositional semantics can perform better than learning-based methods that do not incorporate compositional semantics (accuracy of 89.7% vs. 89.1%), but (2) a method that integrates compositional semantics into learning performs better than all other alternatives (90.7%). We also find that “contentword negators”, not widely employed in previous work, play an important role in determining expression-level polarity. Finally, in contrast to conventional wisdom, we find that expression-level classification accuracy uniformly decreases as additional, potentially disambiguating, context is considered.",
"title": ""
},
{
"docid": "neg:1840138_14",
"text": "Omega-3 polyunsaturated fatty acids such as eicosapentaenoic acid and docosahexaenoic acid have beneficial effects in many inflammatory disorders. Although the mechanism of eicosapentaenoic acid and docosahexaenoic acid action is still not fully defined in molecular terms, recent studies have revealed that, during the course of acute inflammation, omega-3 polyunsaturated fatty acid-derived anti-inflammatory mediators including resolvins and protectins are produced. This review presents recent advances in understanding the formation and action of these mediators, especially focusing on the LC-MS/MS-based lipidomics approach and recently identified bioactive products with potent anti-inflammatory property.",
"title": ""
},
{
"docid": "neg:1840138_15",
"text": "We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on rule-irrelevant features, and attention to exemplar similarity was detrimental to task performance. Comparison of results across studies suggests that the effects are mandatory, non-strategic and not subject to conscious control, and as a result, should be pervasive throughout categorization.",
"title": ""
},
{
"docid": "neg:1840138_16",
"text": "This study examined age differences in perceptions of online communities held by people who were not yet participating in these relatively new social spaces. Using the Technology Acceptance Model (TAM), we investigated the factors that affect future intention to participate in online communities. Our results supported the proposition that perceived usefulness positively affects behavioral intention, yet it was determined that perceived ease of use was not a significant predictor of perceived usefulness. The study also discovered negative relationships between age and Internet self-efficacy and the perceived quality of online community websites. However, the moderating role of age was not found. The findings suggest that the relationships among perceived ease of use, perceived usefulness, and intention to participate in online communities do not change with age. Theoretical and practical implications and limitations were discussed. ! 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840138_17",
"text": "The importance of the maintenance function has increased because of its role in keeping and improving system availability and safety, as well as product quality. To support this role, the development of the communication and information technologies has allowed the emergence of the concept of e-maintenance. Within the era of e-manufacturing and e-business, e-maintenance provides the opportunity for a new maintenance generation. As we will discuss later in this paper, e-maintenance integrates existing telemaintenance principles, with Web services and modern e-collaboration principles. Collaboration allows to share and exchange not only information but also knowledge and (e)-intelligence. By means of a collaborative environment, pertinent knowledge and intelligence become available and usable at the right place and time, in order to facilitate reaching the best maintenance decisions. This paper outlines the basic ideas within the e-maintenance concept and then provides an overview of the current research and challenges in this emerging field. An underlying objective is to identify the industrial/academic actors involved in the technological, organizational or management issues related to the development of e-maintenance. Today, this heterogeneous community has to be federated in order to bring up e-maintenance as a new scientific discipline. r 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840138_18",
"text": "Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose “Answerer in Questioner’s Mind” (AQM), a novel algorithm for goal-oriented dialogue. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer’s intent via selecting a plausible question by explicitly calculating the information gain of the candidate intentions and possible answers to each question. We test our framework on two goal-oriented visual dialogue tasks: “MNIST Counting Dialog” and “GuessWhat?!.” In our experiments, AQM outperforms comparative algorithms and makes human-like dialogue. We further use AQM as a tool for analyzing the mechanism of deep reinforcement learning approach and discuss the future direction of practical goal-oriented neural dialogue systems.",
"title": ""
},
{
"docid": "neg:1840138_19",
"text": "BACKGROUND\nDengue is re-emerging throughout the tropical world, causing frequent recurrent epidemics. The initial clinical manifestation of dengue often is confused with other febrile states confounding both clinical management and disease surveillance. Evidence-based triage strategies that identify individuals likely to be in the early stages of dengue illness can direct patient stratification for clinical investigations, management, and virological surveillance. Here we report the identification of algorithms that differentiate dengue from other febrile illnesses in the primary care setting and predict severe disease in adults.\n\n\nMETHODS AND FINDINGS\nA total of 1,200 patients presenting in the first 72 hours of acute febrile illness were recruited and followed up for up to a 4-week period prospectively; 1,012 of these were recruited from Singapore and 188 from Vietnam. Of these, 364 were dengue RT-PCR positive; 173 had dengue fever, 171 had dengue hemorrhagic fever, and 20 had dengue shock syndrome as final diagnosis. Using a C4.5 decision tree classifier for analysis of all clinical, haematological, and virological data, we obtained a diagnostic algorithm that differentiates dengue from non-dengue febrile illness with an accuracy of 84.7%. The algorithm can be used differently in different disease prevalence to yield clinically useful positive and negative predictive values. 
Furthermore, an algorithm using platelet count, crossover threshold value of a real-time RT-PCR for dengue viral RNA, and presence of pre-existing anti-dengue IgG antibodies in sequential order identified cases with sensitivity and specificity of 78.2% and 80.2%, respectively, that eventually developed thrombocytopenia of 50,000 platelet/mm(3) or less, a level previously shown to be associated with haemorrhage and shock in adults with dengue fever.\n\n\nCONCLUSION\nThis study shows a proof-of-concept that decision algorithms using simple clinical and haematological parameters can predict diagnosis and prognosis of dengue disease, a finding that could prove useful in disease management and surveillance.",
"title": ""
}
] |
1840139 | Fitness Gamification : Concepts , Characteristics , and Applications | [
{
"docid": "pos:1840139_0",
"text": "OBJECTIVES\nTo systematically review levels of metabolic expenditure and changes in activity patterns associated with active video game (AVG) play in children and to provide directions for future research efforts.\n\n\nDATA SOURCES\nA review of the English-language literature (January 1, 1998, to January 1, 2010) via ISI Web of Knowledge, PubMed, and Scholars Portal using the following keywords: video game, exergame, physical activity, fitness, exercise, energy metabolism, energy expenditure, heart rate, disability, injury, musculoskeletal, enjoyment, adherence, and motivation.\n\n\nSTUDY SELECTION\nOnly studies involving youth (< or = 21 years) and reporting measures of energy expenditure, activity patterns, physiological risks and benefits, and enjoyment and motivation associated with mainstream AVGs were included. Eighteen studies met the inclusion criteria. Articles were reviewed and data were extracted and synthesized by 2 independent reviewers. MAIN OUTCOME EXPOSURES: Energy expenditure during AVG play compared with rest (12 studies) and activity associated with AVG exposure (6 studies).\n\n\nMAIN OUTCOME MEASURES\nPercentage increase in energy expenditure and heart rate (from rest).\n\n\nRESULTS\nActivity levels during AVG play were highly variable, with mean (SD) percentage increases of 222% (100%) in energy expenditure and 64% (20%) in heart rate. Energy expenditure was significantly lower for games played primarily through upper body movements compared with those that engaged the lower body (difference, -148%; 95% confidence interval, -231% to -66%; P = .001).\n\n\nCONCLUSIONS\nThe AVGs enable light to moderate physical activity. Limited evidence is available to draw conclusions on the long-term efficacy of AVGs for physical activity promotion.",
"title": ""
},
{
"docid": "pos:1840139_1",
"text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.",
"title": ""
}
] | [
{
"docid": "neg:1840139_0",
"text": "As the wide popularization of online social networks, online users are not content only with keeping online friendship with social friends in real life any more. They hope the system designers can help them exploring new friends with common interest. However, the large amount of online users and their diverse and dynamic interests possess great challenges to support such a novel feature in online social networks. In this paper, by leveraging interest-based features, we design a general friend recommendation framework, which can characterize user interest in two dimensions: context (location, time) and content, as well as combining domain knowledge to improve recommending quality. We also design a potential friend recommender system in a real online social network of biology field to show the effectiveness of our proposed framework.",
"title": ""
},
{
"docid": "neg:1840139_1",
"text": "Cyber-Physical Systems (CPS) are integrations of computation with physical processes. Embedded computers and networks monitor and control the physical processes, usually with feedback loops where physical processes affect computations and vice versa. In the physical world, the passage of time is inexorable and concurrency is intrinsic. Neither of these properties is present in today’s computing and networking abstractions. I argue that the mismatch between these abstractions and properties of physical processes impede technical progress, and I identify promising technologies for research and investment. There are technical approaches that partially bridge the abstraction gap today (such as real-time operating systems, middleware technologies, specialized embedded processor architectures, and specialized networks), and there is certainly considerable room for improvement of these technologies. However, it may be that we need a less incremental approach, where new abstractions are built from the ground up. The foundations of computing are built on the premise that the principal task of computers is transformation of data. Yet we know that the technology is capable of far richer interactions the physical world. I critically examine the foundations that have been built over the last several decades, and determine where the technology and theory bottlenecks and opportunities lie. I argue for a new systems science that is jointly physical and computational.",
"title": ""
},
{
"docid": "neg:1840139_2",
"text": "We introduce an over-sketching interface for feature-preserving surface mesh editing. The user sketches a stroke that is the suggested position of part of a silhouette of the displayed surface. The system then segments all image-space silhouettes of the projected surface, identifies among all silhouette segments the best matching part, derives vertices in the surface mesh corresponding to the silhouette part, selects a sub-region of the mesh to be modified, and feeds appropriately modified vertex positions together with the sub-mesh into a mesh deformation tool. The overall algorithm has been designed to enable interactive modification of the surface --- yielding a surface editing system that comes close to the experience of sketching 3D models on paper.",
"title": ""
},
{
"docid": "neg:1840139_3",
"text": "The design of an ultra wideband aperture-coupled vertical microstrip-microstrip transition is presented. The proposed transition exploits broadside coupling between exponentially tapered microstrip patches at the top and bottom layers via an exponentially tapered slot at the mid layer. The theoretical analysis indicates that the best performance concerning the insertion loss and the return loss over the maximum possible bandwidth can be achieved when the coupling factor is equal to 0.75 (or 2.5 dB). The calculated and simulated results show that the proposed transition has a linear phase performance, an important factor for distortionless pulse operation, with less than 0.4 dB insertion loss and more than 17 dB return loss across the frequency band 3.1 GHz to 10.6 GHz.",
"title": ""
},
{
"docid": "neg:1840139_4",
"text": "With the rapid development of very large scale integration (VLSI) and continuous scaling in the metal oxide semiconductor field effect transistor (MOSFET), pad corrosion in the aluminum (Al) pad surface has become practical concern in the semiconductor industry. This paper presents a new method to improve the pad corrosion on Al pad surface by using new Al/Ti/TiN film stack. The effects of different Al film stacks on the Al pad corrosion have been investigated. The experiment results show that the Al/Ti/TiN film stack could improve bond pad corrosion effectively comparing to Al/SiON film stack. Wafers processed with new Al film stack were stored up to 28 days and display no pad crystal (PDCY) defects on bond pad surfaces.",
"title": ""
},
{
"docid": "neg:1840139_5",
"text": "Evolutionary learning is one of the most popular techniques for designing quantitative investment (QI) products. Trend following (TF) strategies, owing to their briefness and efficiency, are widely accepted by investors. Surprisingly, to the best of our knowledge, no related research has investigated TF investment strategies within an evolutionary learning model. This paper proposes a hybrid long-term and short-term evolutionary trend following algorithm (eTrend) that combines TF investment strategies with the eXtended Classifier Systems (XCS). The proposed eTrend algorithm has two advantages: (1) the combination of stock investment strategies (i.e., TF) and evolutionary learning (i.e., XCS) can significantly improve computation effectiveness and model practicability, and (2) XCS can automatically adapt to market directions and uncover reasonable and understandable trading rules for further analysis, which can help avoid the irrational trading behaviors of common investors. To evaluate eTrend, experiments are carried out using the daily trading data stream of three famous indexes in the Shanghai Stock Exchange. Experimental results indicate that eTrend outperforms the buy-and-hold strategy with high Sortino ratio after the transaction cost. Its performance is also superior to the decision tree and artificial neural network trading models. Furthermore, as the concept drift phenomenon is common in the stock market, an exploratory concept drift analysis is conducted on the trading rules discovered in bear and bull market phases. The analysis revealed interesting and rational results. In conclusion, this paper presents convincing evidence that the proposed hybrid trend following model can indeed generate effective trading guid-",
"title": ""
},
{
"docid": "neg:1840139_6",
"text": "The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.",
"title": ""
},
{
"docid": "neg:1840139_7",
"text": "Large projects are increasingly adopting agile development practices, and this raises new challenges for research. The workshop on principles of large-scale agile development focused on central topics in large-scale: the role of architecture, inter-team coordination, portfolio management and scaling agile practices. We propose eight principles for large-scale agile development, and present a revised research agenda.",
"title": ""
},
{
"docid": "neg:1840139_8",
"text": "A wideband bandpass filter (BPF) with reconfigurable bandwidth (BW) is proposed based on a parallel-coupled line structure and a cross-shaped resonator with open stubs. The p-i-n diodes are used as the tuning elements, which can implement three reconfigurable BW states. The prototype of the designed filter reports an absolute BW tuning range of 1.22 GHz, while the fractional BW is varied from 34.8% to 56.5% when centered at 5.7 GHz. The simulation and measured results are in good agreement. Comparing with previous works, the proposed reconfigurable BPF features wider BW tuning range with maximum number of tuning states.",
"title": ""
},
{
"docid": "neg:1840139_9",
"text": "The purpose of a Beyond 4G (B4G) radio access technology, is to cope with the expected exponential increase of mobile data traffic in local area (LA). The requirements related to physical layer control signaling latencies and to hybrid ARQ (HARQ) round trip time (RTT) are in the order of ~1ms. In this paper, we propose a flexible orthogonal frequency division multiplexing (OFDM) based time division duplex (TDD) physical subframe structure optimized for B4G LA environment. We show that the proposed optimizations allow very frequent link direction switching, thus reaching the tight B4G HARQ RTT requirement and significant control signaling latency reductions compared to existing LTE-Advanced and WiMAX technologies.",
"title": ""
},
{
"docid": "neg:1840139_10",
"text": "Traffic Accidents are occurring due to development of automobile industry and the accidents are unavoidable even the traffic rules are very strictly maintained. Data mining algorithm is applied to model the traffic accident injury level by using traffic accident dataset. It helped by obtaining the characteristics of drivers behavior, road condition and weather condition, Accident severity that are connected with different injury severities and death. This paper presents some models to predict the severity of injury using some data mining algorithms. The study focused on collecting the real data from previous research and obtains the injury severity level of traffic accident data.",
"title": ""
},
{
"docid": "neg:1840139_11",
"text": "One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.",
"title": ""
},
{
"docid": "neg:1840139_12",
"text": "In this manuscript we explore the ways in which the marketplace metaphor resonates with online dating participants and how this conceptual framework influences how they assess themselves, assess others, and make decisions about whom to pursue. Taking a metaphor approach enables us to highlight the ways in which participants’ language shapes their self-concept and interactions with potential partners. Qualitative analysis of in-depth interviews with 34 participants from a large online dating site revealed that the marketplace metaphor was salient for participants, who employed several strategies that reflected the assumptions underlying the marketplace perspective (including resisting the metaphor). We explore the implications of this metaphor for romantic relationship development, such as the objectification of potential partners. Journal of Social and Personal Relationships © The Author(s), 2010. Reprints and permissions: sagepub.co.uk/journalsPermissions.nav, Vol. 27(4): 427–447. DOI: 10.1177/0265407510361614 This research was funded by Affirmative Action Grant 111579 from the Office of Research and Sponsored Programs at California State University, Stanislaus. An earlier version of this paper was presented at the International Communication Association, 2005. We would like to thank Jack Bratich, Art Ramirez, Lamar Reinsch, Jeanine Turner, and three anonymous reviewers for their helpful comments. All correspondence concerning this article should be addressed to Rebecca D. Heino, Georgetown University, McDonough School of Business, Washington D.C. 20057, USA [e-mail: rdh26@georgetown.edu]. Larry Erbert was the Action Editor on this article. at MICHIGAN STATE UNIV LIBRARIES on June 9, 2010 http://spr.sagepub.com Downloaded from",
"title": ""
},
{
"docid": "neg:1840139_13",
"text": "In the past 2 decades, correlational and experimental studies have found a positive association between violent video game play and aggression. There is less evidence, however, to support a long-term relation between these behaviors. This study examined sustained violent video game play and adolescent aggressive behavior across the high school years and directly assessed the socialization (violent video game play predicts aggression over time) versus selection hypotheses (aggression predicts violent video game play over time). Adolescents (N = 1,492, 50.8% female) were surveyed annually from Grade 9 to Grade 12 about their video game play and aggressive behaviors. Nonviolent video game play, frequency of overall video game play, and a comprehensive set of potential 3rd variables were included as covariates in each analysis. Sustained violent video game play was significantly related to steeper increases in adolescents' trajectory of aggressive behavior over time. Moreover, greater violent video game play predicted higher levels of aggression over time, after controlling for previous levels of aggression, supporting the socialization hypothesis. In contrast, no support was found for the selection hypothesis. Nonviolent video game play also did not predict higher levels of aggressive behavior over time. Our findings, and the fact that many adolescents play video games for several hours every day, underscore the need for a greater understanding of the long-term relation between violent video games and aggression, as well as the specific game characteristics (e.g., violent content, competition, pace of action) that may be responsible for this association.",
"title": ""
},
{
"docid": "neg:1840139_14",
"text": "Chondrosarcomas are indolent but invasive chondroid malignancies that can form in the skull base. Standard management of chondrosarcoma involves surgical resection and adjuvant radiation therapy. This review evaluates evidence from the literature to assess the importance of the surgical approach and extent of resection on outcomes for patients with skull base chondrosarcoma. Also evaluated is the ability of the multiple modalities of radiation therapy, such as conventional fractionated radiotherapy, proton beam, and stereotactic radiosurgery, to control tumor growth. Finally, emerging therapies for the treatment of skull-base chondrosarcoma are discussed.",
"title": ""
},
{
"docid": "neg:1840139_15",
"text": "Periodic inspection of a hanger rope is needed for the effective maintenance of suspension bridge. However, it is dangerous for human workers to access the hanger rope and not easy to check the exact state of the hanger rope. In this work we have developed a wheel-based robot that can approach the hanger rope instead of the human worker and carry the inspection device which is able to examine the inside status of the hanger rope. Meanwhile, a wheel-based cable climbing robot may be badly affected by the vibration that is generated while the robot moves on the bumpy surface of the hanger rope. The caterpillar is able to safely drive with the wide contact face on the rough terrain. Accordingly, we developed the caterpillar that can be combined with the developed cable climbing robot. In this paper, the caterpillar is introduced and its performance is compared with the wheel-based cable climbing robot.",
"title": ""
},
{
"docid": "neg:1840139_16",
"text": "We present our 11-layers deep, double-pathway, 3D Convolutional Neural Network, developed for the segmentation of brain lesions. The developed system segments pathology voxel-wise after processing a corresponding multi-modal 3D patch at multiple scales. We demonstrate that it is possible to train such a deep and wide 3D CNN on a small dataset of 28 cases. Our network yields promising results on the task of segmenting ischemic stroke lesions, accomplishing a mean Dice of 64% (66% after postprocessing) on the ISLES 2015 training dataset, ranking among the top entries. Regardless its size, our network is capable of processing a 3D brain volume in 3 minutes, making it applicable to the automated analysis of larger study cohorts.",
"title": ""
},
{
"docid": "neg:1840139_17",
"text": "In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see that such a construction can be characterised by means of a simple method in the context of propositional logic. We then propose an application of our approach to rational closure in the field of Description Logics, an important knowledge representation formalism, and provide a simple decision procedure for this case.",
"title": ""
},
{
"docid": "neg:1840139_18",
"text": "Age progression is defined as aesthetically re-rendering the aging face at any future age for an individual face. In this work, we aim to automatically render aging faces in a personalized way. Basically, for each age group, we learn an aging dictionary to reveal its aging characteristics (e.g., wrinkles), where the dictionary bases corresponding to the same index yet from two neighboring aging dictionaries form a particular aging pattern cross these two age groups, and a linear combination of all these patterns expresses a particular personalized aging process. Moreover, two factors are taken into consideration in the dictionary learning process. First, beyond the aging dictionaries, each person may have extra personalized facial characteristics, e.g., mole, which are invariant in the aging process. Second, it is challenging or even impossible to collect faces of all age groups for a particular person, yet much easier and more practical to get face pairs from neighboring age groups. To this end, we propose a novel Bi-level Dictionary Learning based Personalized Age Progression (BDL-PAP) method. Here, bi-level dictionary learning is formulated to learn the aging dictionaries based on face pairs from neighboring age groups. Extensive experiments well demonstrate the advantages of the proposed BDL-PAP over other state-of-the-arts in term of personalized age progression, as well as the performance gain for cross-age face verification by synthesizing aging faces.",
"title": ""
},
{
"docid": "neg:1840139_19",
"text": "For the last 10 years, interest has grown in low frequency shear waves that propagate in the human body. However, the generation of shear waves by acoustic vibrators is a relatively complex problem, and the directivity patterns of shear waves produced by the usual vibrators are more complicated than those obtained for longitudinal ultrasonic transducers. To extract shear modulus parameters from the shear wave propagation in soft tissues, it is important to understand and to optimize the directivity pattern of shear wave vibrators. This paper is devoted to a careful study of the theoretical and the experimental directivity pattern produced by a point source in soft tissues. Both theoretical and experimental measurements show that the directivity pattern of a point source vibrator presents two very strong lobes for an angle around 35/spl deg/. This paper also points out the impact of the near field in the problem of shear wave generation.",
"title": ""
}
] |
1840140 | The Factor Structure of the System Usability Scale | [
{
"docid": "pos:1840140_0",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "pos:1840140_1",
"text": "Correlations between prototypical usability metrics from 90 distinct usability tests were strong when measured at the task-level (r between .44 and .60). Using test-level satisfaction ratings instead of task-level ratings attenuated the correlations (r between .16 and .24). The method of aggregating data from a usability test had a significant effect on the magnitude of the resulting correlations. The results of principal components and factor analyses on the prototypical usability metrics provided evidence for an underlying construct of general usability with objective and subjective factors.",
"title": ""
}
] | [
{
"docid": "neg:1840140_0",
"text": "We study the problem of representation learning in heterogeneous networks. Its unique challenges come from the existence of multiple types of nodes and links, which limit the feasibility of the conventional network embedding techniques. We develop two scalable representation learning models, namely metapath2vec and metapath2vec++. The metapath2vec model formalizes meta-path-based random walks to construct the heterogeneous neighborhood of a node and then leverages a heterogeneous skip-gram model to perform node embeddings. The metapath2vec++ model further enables the simultaneous modeling of structural and semantic correlations in heterogeneous networks. Extensive experiments show that metapath2vec and metapath2vec++ are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, such as node classification, clustering, and similarity search, but also discern the structural and semantic correlations between diverse network objects.",
"title": ""
},
{
"docid": "neg:1840140_1",
"text": "Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing textureand color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.",
"title": ""
},
{
"docid": "neg:1840140_2",
"text": "Recently, reinforcement learning has been successfully applied to the logical game of Go, various Atari games, and even a 3D game, Labyrinth, though it continues to have problems in sparse reward settings. It is difficult to explore, but also difficult to exploit, a small number of successes when learning policy. To solve this issue, the subgoal and option framework have been proposed. However, discovering subgoals online is too expensive to be used to learn options in large state spaces. We propose Micro-objective learning (MOL) to solve this problem. The main idea is to estimate how important a state is while training and to give an additional reward proportional to its importance. We evaluated our algorithm in two Atari games: Montezuma’s Revenge and Seaquest. With three experiments to each game, MOL significantly improved the baseline scores. Especially in Montezuma’s Revenge, MOL achieved two times better results than the previous state-of-the-art model.",
"title": ""
},
{
"docid": "neg:1840140_3",
"text": "The two critical factors distinguishing inventory management in a multifirm supply-chain context from the more traditional centrally planned perspective are incentive conflicts and information asymmetries. We study the well-known order quantity/reorder point (Q, r) model in a two-player context, using a framework inspired by observations during a case study. We show how traditional allocations of decision rights to supplier and buyer lead to inefficient outcomes, and we use principal-agent models to study the effects of information asymmetries about setup cost and backorder cost, respectively. We analyze two “opposite” models of contracting on inventory policies. First, we derive the buyer’s optimal menu of contracts when the supplier has private information about setup cost, and we show how consignment stock can help reduce the impact of this information asymmetry. Next, we study consignment and assume the supplier cannot observe the buyer’s backorder cost. We derive the supplier’s optimal menu of contracts on consigned stock level and show that in this case, the supplier effectively has to overcompensate the buyer for the cost of each stockout. Our theoretical analysis and the case study suggest that consignment stock helps reduce cycle stock by providing the supplier with an additional incentive to decrease batch size, but simultaneously gives the buyer an incentive to increase safety stock by exaggerating backorder costs. This framework immediately points to practical recommendations on how supply-chain incentives should be realigned to overcome existing information asymmetries.",
"title": ""
},
{
"docid": "neg:1840140_4",
"text": "Tasks that demand externalized attention reliably suppress default network activity while activating the dorsal attention network. These networks have an intrinsic competitive relationship; activation of one suppresses activity of the other. Consequently, many assume that default network activity is suppressed during goal-directed cognition. We challenge this assumption in an fMRI study of planning. Recent studies link default network activity with internally focused cognition, such as imagining personal future events, suggesting a role in autobiographical planning. However, it is unclear how goal-directed cognition with an internal focus is mediated by these opposing networks. A third anatomically interposed 'frontoparietal control network' might mediate planning across domains, flexibly coupling with either the default or dorsal attention network in support of internally versus externally focused goal-directed cognition, respectively. We tested this hypothesis by analyzing brain activity during autobiographical versus visuospatial planning. Autobiographical planning engaged the default network, whereas visuospatial planning engaged the dorsal attention network, consistent with the anti-correlated domains of internalized and externalized cognition. Critically, both planning tasks engaged the frontoparietal control network. Task-related activation of these three networks was anatomically consistent with independently defined resting-state functional connectivity MRI maps. Task-related functional connectivity analyses demonstrate that the default network can be involved in goal-directed cognition when its activity is coupled with the frontoparietal control network. Additionally, the frontoparietal control network may flexibly couple with the default and dorsal attention networks according to task domain, serving as a cortical mediator linking the two networks in support of goal-directed cognitive processes.",
"title": ""
},
{
"docid": "neg:1840140_5",
"text": "Supervised object detection and semantic segmentation require object or even pixel level annotations. When there exist image level labels only, it is challenging for weakly supervised algorithms to achieve accurate predictions. The accuracy achieved by top weakly supervised algorithms is still significantly lower than their fully supervised counterparts. In this paper, we propose a novel weakly supervised curriculum learning pipeline for multi-label object recognition, detection and semantic segmentation. In this pipeline, we first obtain intermediate object localization and pixel labeling results for the training images, and then use such results to train task-specific deep networks in a fully supervised manner. The entire process consists of four stages, including object localization in the training images, filtering and fusing object instances, pixel labeling for the training images, and task-specific network training. To obtain clean object instances in the training images, we propose a novel algorithm for filtering, fusing and classifying object instances collected from multiple solution mechanisms. In this algorithm, we incorporate both metric learning and density-based clustering to filter detected object instances. Experiments show that our weakly supervised pipeline achieves state-of-the-art results in multi-label image classification as well as weakly supervised object detection and very competitive results in weakly supervised semantic segmentation on MS-COCO, PASCAL VOC 2007 and PASCAL VOC 2012.",
"title": ""
},
{
"docid": "neg:1840140_6",
"text": "Rheology, as a branch of physics, studies the deformation and flow of matter in response to an applied stress or strain. According to the materials’ behaviour, they can be classified as Newtonian or non-Newtonian (Steffe, 1996; Schramm, 2004). Most foodstuffs exhibit properties of non-Newtonian viscoelastic systems (Abang Zaidel et al., 2010). Among them, dough can be considered the most unique system from the point of material science. It is a viscoelastic system which exhibits shear-thinning and thixotropic behaviour (Weipert, 1990). This behaviour is a consequence of the dough’s complex structure, in which starch granules (75-80%) are surrounded by a three-dimensional protein (20-25%) network (Bloksma, 1990, as cited in Weipert, 2006). Wheat proteins consist of gluten proteins (80-85% of total wheat protein), which comprise prolamins (in wheat gliadins) and glutelins (in wheat glutenins), and non-gluten proteins (15-20% of the total wheat proteins) such as albumins and globulins (Veraverbeke & Delcour, 2002). The gluten complex is a viscoelastic protein responsible for dough structure formation. Among cereal technologists, rheology is widely recognized as a valuable tool in the quality assessment of flour. Hence, in the cereal scientific community, rheological measurements are generally employed throughout the whole processing chain in order to monitor the mechanical properties, molecular structure and composition of the material, to imitate the material’s behaviour during processing, and to anticipate the quality of the final product (Dobraszczyk & Morgenstern, 2003). Rheology is a particularly important technique in revealing the influence of flour constituents and additives on dough behaviour during breadmaking. There are many test methods available to measure rheological properties, which are commonly divided into empirical (descriptive, imitative) and fundamental (basic) (Scott Blair, 1958, as cited in Weipert, 1990). 
Although criticized for their shortcomings concerning inflexibility in defining the level of deforming force, usage of strong deformation forces, interpretation of results in relative non-SI units, large sample requirements, and their inability to define rheological parameters such as stress, strain, modulus or viscosity (Weipert, 1990; Dobraszczyk & Morgenstern, 2003), empirical rheological measurements are still indispensable in cereal quality laboratories. From the empirical rheological parameters it is possible to determine the optimal flour quality for a particular purpose. The empirical techniques used for dough quality",
"title": ""
},
{
"docid": "neg:1840140_7",
"text": "It is argued that, hidden within the flow of signals from typical cameras, through image processing, to display media, is a homomorphic filter. While homomorphic filtering is often desirable, there are some occasions where it is not. Thus, cancellation of this implicit homomorphic filter is proposed, through the introduction of an antihomomorphic filter. This concept gives rise to the principle of quantigraphic image processing, wherein it is argued that most cameras can be modeled as an array of idealized light meters each linearly responsive to a semi-monotonic function of the quantity of light received, integrated over a fixed spectral response profile. This quantity depends only on the spectral response of the sensor elements in the camera. A particular class of functional equations, called comparametric equations, is introduced as a basis for quantigraphic image processing. These are fundamental to the analysis and processing of multiple images differing only in exposure. The \"gamma correction\" of an image is presented as a simple example of a comparametric equation, for which it is shown that the underlying quantigraphic function does not pass through the origin. Thus, it is argued that exposure adjustment by gamma correction is inherently flawed, and alternatives are provided. These alternatives, when applied to a plurality of images that differ only in exposure, give rise to a new kind of processing in the \"amplitude domain\". The theoretical framework presented in this paper is applicable to the processing of images from nearly all types of modern cameras. This paper is a much revised draft of a 1992 peer-reviewed but unpublished report by the author, entitled \"Lightspace and the Wyckoff principle.\"",
"title": ""
},
{
"docid": "neg:1840140_8",
"text": "Word2Vec is a widely used algorithm for extracting low-dimensional vector representations of words. It generated considerable excitement in the machine learning and natural language processing (NLP) communities recently due to its exceptional performance in many NLP applications such as named entity recognition, sentiment analysis, machine translation and question answering. State-of-the-art algorithms including those by Mikolov et al. have been parallelized for multi-core CPU architectures but are based on vector-vector operations that are memory-bandwidth intensive and do not efficiently use computational resources. In this paper, we improve reuse of various data structures in the algorithm through the use of minibatching, hence allowing us to express the problem using matrix multiply operations. We also explore different techniques to distribute word2vec computation across nodes in a compute cluster, and demonstrate good strong scalability up to 32 nodes. In combination, these techniques allow us to scale up the computation near linearly across cores and nodes, and process hundreds of millions of words per second, which is the fastest word2vec implementation to the best of our knowledge.",
"title": ""
},
{
"docid": "neg:1840140_9",
"text": "Wireless sensor networks (WSNs) use the unlicensed industrial, scientific, and medical (ISM) band for transmissions. However, with the increasing usage and demand of these networks, the currently available ISM band does not suffice for their transmissions. This spectrum insufficiency problem has been overcome by incorporating the opportunistic spectrum access capability of cognitive radio (CR) into the existing WSN, thus giving birth to CR sensor networks (CRSNs). The sensor nodes in CRSNs depend on power sources that have limited power supply capabilities. Therefore, advanced and intelligent radio resource allocation schemes are very essential to perform dynamic and efficient spectrum allocation among sensor nodes and to optimize the energy consumption of each individual node in the network. Radio resource allocation schemes aim to ensure QoS guarantee, maximize the network lifetime, reduce the internode and internetwork interferences, etc. In this paper, we present a survey of the recent advances in radio resource allocation in CRSNs. Radio resource allocation schemes in CRSNs are classified into three major categories, i.e., centralized, cluster-based, and distributed. The schemes are further divided into several classes on the basis of performance optimization criteria that include energy efficiency, throughput maximization, QoS assurance, interference avoidance, fairness and priority consideration, and hand-off reduction. An insight into the related issues and challenges is provided, and future research directions are clearly identified.",
"title": ""
},
{
"docid": "neg:1840140_10",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about real-world topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into real-world knowledge.",
"title": ""
},
{
"docid": "neg:1840140_11",
"text": "In this paper we present a neural network based system for automated e-mail filing into folders and antispam filtering. The experiments show that it is more accurate than several other techniques. We also investigate the effects of various feature selection, weighting and normalization methods, and also the portability of the anti-spam filter across different users.",
"title": ""
},
{
"docid": "neg:1840140_12",
"text": "The complexity of the visual world creates significant challenges for comprehensive visual understanding. In spite of recent successes in visual recognition, today’s vision systems would still struggle to deal with visual queries that require a deeper reasoning. We propose a knowledge base (KB) framework to handle an assortment of visual queries, without the need to train new classifiers for new tasks. Building such a large-scale multimodal KB presents a major challenge of scalability. We cast a large-scale MRF into a KB representation, incorporating visual, textual and structured data, as well as their diverse relations. We introduce a scalable knowledge base construction system that is capable of building a KB with half billion variables and millions of parameters in a few hours. Our system achieves competitive results compared to purpose-built models on standard recognition and retrieval tasks, while exhibiting greater flexibility in answering richer visual queries.",
"title": ""
},
{
"docid": "neg:1840140_13",
"text": "The commitment problem in the credit market and its effects on economic growth are discussed. Completions of investment projects increase the capital stock of the economy. These projects require credits which are financed by financial intermediaries. A simplified credit model of Dewatripont and Maskin is used to describe the financing process, in which the commitment problem or the \"soft budget constraint\" problem arises. However, in a dynamic general equilibrium setup with endogenous determination of the value and cost of projects, there arise multiple equilibria in the project financing model, namely a refinancing equilibrium and a no-refinancing equilibrium. The former leads the economy to a stationary state with a smaller capital stock level than the latter. Both the elimination of the refinancing equilibrium and the possibility of an \"Animal Spirits Cycles\" equilibrium are also discussed.",
"title": ""
},
{
"docid": "neg:1840140_14",
"text": "Simultaneous localization and mapping (SLAM) is the process by which a mobile robot can build a map of the environment and, at the same time, use this map to compute its location. The past decade has seen rapid and exciting progress in solving the SLAM problem together with many compelling implementations of SLAM methods. The great majority of work has focused on improving computational efficiency while ensuring consistent and accurate estimates for the map and vehicle pose. However, there has also been much research on issues such as nonlinearity, data association, and landmark characterization, all of which are vital in achieving a practical and robust SLAM implementation. This tutorial focuses on the recursive Bayesian formulation of the SLAM problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. Part I of this tutorial (IEEE Robotics & Automation Magazine, vol. 13, no. 2) surveyed the development of the essential SLAM algorithm in state-space and particle-filter form, described a number of key implementations, and cited locations of source code and real-world data for evaluation of SLAM algorithms. Part II of this tutorial (this article) surveys the current state of the art in SLAM research with a focus on three key areas: computational complexity, data association, and environment representation. Much of the mathematical notation and essential concepts used in this article are defined in Part I of this tutorial and, therefore, are not repeated here. SLAM, in its naive form, scales quadratically with the number of landmarks in a map. For real-time implementation, this scaling is potentially a substantial limitation in the use of SLAM methods. The complexity section surveys the many approaches that have been developed to reduce this complexity. These include linear-time state augmentation, sparsification in information form, partitioned updates, and submapping methods. 
A second major hurdle to overcome in the implementation of SLAM methods is to correctly associate observations of landmarks with landmarks held in the map. Incorrect association can lead to catastrophic failure of the SLAM algorithm. Data association is particularly important when a vehicle returns to a previously mapped region after a long excursion, the so-called loop-closure problem. The data association section surveys current data association methods used in SLAM. These include batch-validation methods that exploit constraints inherent in the SLAM formulation, appearance-based methods, and multihypothesis techniques. The third development discussed in this tutorial is the trend towards richer appearance-based models of landmarks and maps. While initially motivated by problems in data association and loop closure, these methods have resulted in qualitatively different methods of describing the SLAM problem, focusing on trajectory estimation rather than landmark estimation. The environment representation section surveys current developments in this area along a number of lines, including delayed mapping, the use of nongeometric landmarks, and trajectory estimation methods. SLAM methods have now reached a state of considerable maturity. Future challenges will center on methods enabling large-scale implementations in increasingly unstructured environments and especially in situations where GPS-like solutions are unavailable or unreliable: in urban canyons, under foliage, under water, or on remote planets.",
"title": ""
},
{
"docid": "neg:1840140_15",
"text": "The World Wide Web Consortium (W3C) is the international standards organization for the World Wide Web (WWW). It develops standards, specifications and recommendations to enhance interoperability, maximize consensus about the content of the web, and define major parts of what makes the World Wide Web work. Phishing is a type of Internet scam that seeks to obtain a user's credentials, such as passwords, credit card numbers, bank account details and other sensitive information, through fraudulent websites. Some characteristics in webpage source code distinguish phishing websites from legitimate websites and violate the W3C standards, so phishing attacks can be detected by checking a webpage and searching for these characteristics in its source code. In this paper, we propose a phishing detection approach based on checking the webpage source code: we extract some phishing characteristics from the W3C standards to evaluate the security of websites, and for each phishing characteristic found in the source code we decrease the initial security weight. Finally, we calculate a security percentage based on the final weight; a high percentage indicates a secure website, while lower percentages indicate that the website is most likely a phishing website. We check the source code of a legitimate and a phishing website and compare their security percentages; the phishing website obtains a lower security percentage than the legitimate one, showing that our approach can detect phishing websites by checking phishing characteristics in the webpage source code.",
"title": ""
},
{
"docid": "neg:1840140_16",
"text": "A green synthesis of highly stable gold and silver nanoparticles (NPs) using arabinoxylan (AX) from ispaghula (Plantago ovata) seed husk is being reported. The NPs were synthesized by stirring a mixture of AX and HAuCl(4)·H(2)O or AgNO(3), separately, below 100 °C for less than an hour, where AX worked as the reducing and the stabilizing agent. The synthesized NPs were characterized by surface plasmon resonance (SPR) spectroscopy, transmission electron microscopy (TEM), atomic force microscopy (AFM), and X-ray diffraction (XRD). The particle size (silver: 5-20 nm; gold: 8-30 nm) was found to be dependent on pH, temperature, reaction time and concentrations of AX and the metal salts used. The NPs were poly-dispersed within a narrow range. They were stable for more than two years’ time.",
"title": ""
},
{
"docid": "neg:1840140_17",
"text": "The concept of supply chain is about managing coordinated information and material flows, plant operations, and logistics. It provides flexibility and agility in responding to consumer demand shifts without cost overlays in resource utilization. The fundamental premise of this philosophy is; synchronization among multiple autonomous business entities represented in it. That is, improved coordination within and between various supply-chain members. Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member as well as the supply chain. Describes architecture to create the appropriate structure, install proper controls, and implement principles of optimization to synchronize the supply chain. A supply-chain model based on a collaborative system approach is illustrated utilizing the example of the textile industry. process flexibility and coordination of processes across many sites. More and more organizations are promoting employee empowerment and the need for rules-based, real-time decision support systems to attain organizational and process flexibility, as well as to respond to competitive pressure to introduce new products more quickly, cheaply and of improved quality. The underlying philosophy of managing supply chains has evolved to respond to these changing business trends. Supply-chain management phenomenon has received the attention of researchers and practitioners in various topics. In the earlier years, the emphasis was on materials planning utilizing materials requirements planning techniques, inventory logistics management with one warehouse multi-retailer distribution system, and push and pull operation techniques for production systems. 
In the last few years, however, there has been a renewed interest in designing and implementing integrated systems, such as enterprise resource planning, multi-echelon inventory, and synchronous-flow manufacturing, respectively. A number of factors have contributed to this shift. First, there has been a realization that better planning and management of complex interrelated systems, such as materials planning, inventory management, capacity planning, logistics, and production systems will lead to overall improvement in enterprise productivity. Second, advances in information and communication technologies complemented by sophisticated decision support systems enable the designing, implementing and controlling of the strategic and tactical strategies essential to delivery of integrated systems. In the next section, a framework that offers an unified approach to dealing with enterprise related problems is presented. A framework for analysis of enterprise integration issues As mentioned in the preceding section, the availability of advanced production and logistics management systems has the potential of fundamentally influencing enterprise integration issues. The motivation in pursuing research issues described in this paper is to propose a framework that enables dealing with these effectively. The approach suggested in this paper utilizing supply-chain philosophy for enterprise integration proposes domain independent problem solving and modeling, and domain dependent analysis and implementation. The purpose of the approach is to ascertain characteristics of the problem independent of the specific problem environment. Consequently, the approach delivers solution(s) or the solution method that are intrinsic to the problem and not its environment. Analysis methods help to understand characteristics of the solution methodology, as well as providing specific guarantees of effectiveness. 
Invariably, insights gained from these analyses can be used to develop effective problem solving tools and techniques for complex enterprise integration problems. The discussion of the framework is organized as follows. First, the key guiding principles of the proposed framework on which a supply chain ought to be built are outlined. Then, a cooperative supply-chain (CSC) system is described as a special class of a supply-chain network implementation. Next, discussion on a distributed problemsolving strategy that could be employed in integrating this type of system is presented. Following this, key components of a CSC system are described. Finally, insights on modeling a CSC system are offered. Key modeling principles are elaborated through two distinct modeling approaches in the management science discipline. Supply chain guiding principles Firms have increasingly been adopting enterprise/supply-chain management techniques in order to deal with integration issues. To focus on these integration efforts, the following guiding principles for the supply-chain framework are proposed. These principles encapsulate trends in production and logistics management that a supplychain arrangement may be designed to capture. . Supply chain is a cooperative system. The supply-chain arrangement exists on cooperation among its members. Cooperation occurs in many forms, such as sharing common objectives and goals for the group entity; utilizing joint policies, for instance in marketing and production; setting up common budgets, cost and price structures; and identifying commitments on capacity, production plans, etc. . Supply chain exists on the group dynamics of its members. The existence of a supply chain is dependent on the interaction among its members. This interaction occurs in the form of exchange of information with regard to input, output, functions and controls, such as objectives and goals, and policies. 
By analyzing this information, members of a supply chain may choose to modify their behavior attuned with group expectations. . Negotiation and compromise are norms of operation in a supply chain. In order to realize goals and objectives of the group, members negotiate on commitments made to one another for price, capacity, production plans, etc. These negotiations often lead to compromises by one or many members on these issues, leading up to realization of sub-optimal goals and objectives by members. . Supply-chain system solutions are Pareto-optimal (satisficing), not optimizing. Supply-chain problems similar to many real world applications involve several objective functions of its members simultaneously. In all such applications, it is extremely rare to have one feasible solution that simultaneously optimizes all of the objective functions. Typically, optimizing one of the objective functions has the effect of moving another objective function away from its most desirable value. These are the usual conflicts among the objective functions in the multiobjective models. As a multi-objective problem, the supply-chain model produces non-dominated or Pareto-optimal solutions. That is, solutions for a supply-chain problem do not leave any member worse-off at the expense of another. . Integration in supply chain is achieved through synchronization. Integration across the supply chain is achieved through synchronization of activities at the member entity and aggregating its impact through process, function, business, and on to enterprise levels, either at the member entity or the group entity. Thus, by synchronization of supply-chain components, existing bottlenecks in the system are eliminated, while future ones are prevented from occurring. 
A cooperative supply-chain. A supply-chain network depicted in Figure 1 can be a complex web of systems, sub-systems, operations, activities, and their relationships to one another, belonging to its various members, namely suppliers, carriers, manufacturing plants, distribution centers, retailers, and consumers. The design, modeling and implementation of such a system, therefore, can be difficult, unless various parts of it are cohesively tied to the whole. The concept of a supply-chain is about managing coordinated information and material flows, plant operations, and logistics through a common set of principles, strategies, policies, and performance metrics throughout its developmental life cycle (Lee and Billington, 1993). It provides flexibility and agility in responding to consumer demand shifts with minimum cost overlays in resource utilization. The fundamental premise of this philosophy is synchronization among multiple autonomous entities represented in it. That is, improved coordination within and between various supply-chain members. Coordination is achieved within the framework of commitments made by members to each other. Members negotiate and compromise in a spirit of cooperation in order to meet these commitments. Hence, the label cooperative supply-chain (CSC). Increased coordination can lead to reduction in lead times and costs, alignment of interdependent decision-making processes, and improvement in the overall performance of each member, as well as the supply-chain (group) (Chandra, 1997; Poirier, 1999; Tzafastas and Kapsiotis, 1994). A generic textile supply chain has for its primary raw material vendors, cotton growers and/or chemical suppliers, depending upon whether the end product is cotton, polyester or some combination of cotton and polyester garment. Secondary raw material vendors are suppliers of accessories such as zippers, buttons, thread, garment tags, etc. 
Other tier suppliers in the complete pipeline are: fiber manufacturers for producing the polyester or cotton fiber yarn; textile manufacturers for weaving and dying yarn into colored textile fabric; an apparel maker for cutting, sewing and packing the garment; a distribution center for merchandising the garment; and a retailer selling the brand name garment to consumers at a shopping mall or center. Synchronization of the textile supply chain is achieved through coordination primarily of: . replenishment schedules that have be",
"title": ""
},
{
"docid": "neg:1840140_18",
"text": "The health benefits of garlic likely arise from a wide variety of components, possibly working synergistically. The complex chemistry of garlic makes it plausible that variations in processing can yield quite different preparations. Highly unstable thiosulfinates, such as allicin, disappear during processing and are quickly transformed into a variety of organosulfur components. The efficacy and safety of these preparations in preparing dietary supplements based on garlic are also contingent on the processing methods employed. Although there are many garlic supplements commercially available, they fall into one of four categories, i.e., dehydrated garlic powder, garlic oil, garlic oil macerate and aged garlic extract (AGE). Garlic and garlic supplements are consumed in many cultures for their hypolipidemic, antiplatelet and procirculatory effects. In addition to these proclaimed beneficial effects, some garlic preparations also appear to possess hepatoprotective, immune-enhancing, anticancer and chemopreventive activities. Some preparations appear to be antioxidative, whereas others may stimulate oxidation. These additional biological effects attributed to AGE may be due to compounds, such as S-allylcysteine, S-allylmercaptocysteine, N(alpha)-fructosyl arginine and others, formed during the extraction process. Although not all of the active ingredients are known, ample research suggests that several bioavailable components likely contribute to the observed beneficial effects of garlic.",
"title": ""
},
{
"docid": "neg:1840140_19",
"text": "Until recently the information technology (IT)-centricity was the prevailing paradigm in cyber security that was organized around confidentiality, integrity and availability of IT assets. Despite its widespread usage, the weakness of IT-centric cyber security became increasingly obvious with the deployment of very large IT infrastructures and introduction of highly mobile tactical missions where the IT-centric cyber security was not able to take into account the dynamics of time and space bound behavior of missions and changes in their operational context. In this paper we will show that the move from IT-centricity towards the notion of cyber attack resilient missions opens new opportunities in achieving the completion of mission goals even if the IT assets and services that are supporting the missions are under cyber attacks. The paper discusses several fundamental architectural principles of achieving cyber attack resilience of missions, including mission-centricity, survivability through adaptation, synergistic mission C2 and mission cyber security management, and the real-time temporal execution of the mission tasks. In order to achieve the overall system resilience and survivability under a cyber attack, both the missions and the IT infrastructure are considered as two interacting adaptable multi-agent systems. While the paper is mostly concerned with the architectural principles of achieving cyber attack resilient missions, several models and algorithms that support resilience of missions are discussed in a fairly detailed manner.",
"title": ""
}
] |
1840141 | BRISK: Binary Robust invariant scalable keypoints | [
{
"docid": "pos:1840141_0",
"text": "The efficient detection of interesting features is a crucial step for various tasks in Computer Vision. Corners are favored cues due to their two dimensional constraint and fast algorithms to detect them. Recently, a novel corner detection approach, FAST, has been presented which outperforms previous algorithms in both computational performance and repeatability. We will show how the accelerated segment test, which underlies FAST, can be significantly improved by making it more generic while increasing its performance. We do so by finding the optimal decision tree in an extended configuration space, and demonstrating how specialized trees can be combined to yield an adaptive and generic accelerated segment test. The resulting method provides high performance for arbitrary environments and so unlike FAST does not have to be adapted to a specific scene structure. We will also discuss how different test patterns affect the corner response of the accelerated segment test.",
"title": ""
}
] | [
{
"docid": "neg:1840141_0",
"text": "In this paper, we study the 3D volumetric modeling problem by adopting the Wasserstein introspective neural networks method (WINN) that was previously applied to 2D static images. We name our algorithm 3DWINN which enjoys the same properties as WINN in the 2D case: being simultaneously generative and discriminative. Compared to the existing 3D volumetric modeling approaches, 3DWINN demonstrates competitive results on several benchmarks in both the generation and the classification tasks. In addition to the standard inception score, the Fréchet Inception Distance (FID) metric is also adopted to measure the quality of 3D volumetric generations. In addition, we study adversarial attacks for volumetric data and demonstrate the robustness of 3DWINN against adversarial examples while achieving appealing results in both classification and generation within a single model. 3DWINN is a general framework and it can be applied to the emerging tasks for 3D object and scene modeling.",
"title": ""
},
{
"docid": "neg:1840141_1",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
},
{
"docid": "neg:1840141_2",
"text": "In this paper, we introduce autoencoder ensembles for unsupervised outlier detection. One problem with neural networks is that they are sensitive to noise and often require large data sets to work robustly, while increasing data size makes them slow. As a result, there are only a few existing works in the literature on the use of neural networks in outlier detection. This paper shows that neural networks can be a very competitive technique to other existing methods. The basic idea is to randomly vary on the connectivity architecture of the autoencoder to obtain significantly better performance. Furthermore, we combine this technique with an adaptive sampling method to make our approach more efficient and effective. Experimental results comparing the proposed approach with state-of-theart detectors are presented on several benchmark data sets showing the accuracy of our approach.",
"title": ""
},
{
"docid": "neg:1840141_3",
"text": "Syndromal classification is a well-developed diagnostic system but has failed to deliver on its promise of the identification of functional pathological processes. Functional analysis is tightly connected to treatment but has failed to develop testable, replicable classification systems. Functional diagnostic dimensions are suggested as a way to develop the functional classification approach, and experiential avoidance is described as one such dimension. A wide range of research is reviewed showing that many forms of psychopathology can be conceptualized as unhealthy efforts to escape and avoid emotions, thoughts, memories, and other private experiences. It is argued that experiential avoidance, as a functional diagnostic dimension, has the potential to integrate the efforts and findings of researchers from a wide variety of theoretical paradigms, research interests, and clinical domains and to lead to testable new approaches to the analysis and treatment of behavioral disorders. Steven C. Hayes, Kelly G. Wilson, Elizabeth V. Gifford, and Victoria M. Follette, Department of Psychology, University of Nevada; Kirk Strosahl, Mental Health Center, Group Health Cooperative, Seattle, Washington. Preparation of this article was supported in part by Grant DA08634 from the National Institute on Drug Abuse. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, Mailstop 296, College of Arts and Science, University of Nevada, Reno, Nevada 89557-0062. The process of classification lies at the root of all scientific behavior. It is literally impossible to speak about a truly unique event, alone and cut off from all others, because words themselves are means of categorization (Bruner, Goodnow, & Austin, 1956). Science is concerned with refined and systematic verbal formulations of events and relations among events. 
Because \"events\" are always classes of events, and \"relations\" are always classes of relations, classification is one of the central tasks of science. The field of psychopathology has seen myriad classification systems (Hersen & Bellack, 1988; Sprock & Blashfield, 1991). The differences among some of these approaches are both long-standing and relatively unchanging, in part because systems are never free from a priori assumptions and guiding principles that provide a framework for organizing information (Adams & Cassidy, 1993). In the present article, we briefly examine the differences between two core classification strategies in psychopathology: syndromal and functional. We then articulate one possible functional diagnostic dimension: experiential avoidance. Several common syndromal categories are examined to see how this dimension can organize data found among topographical groupings. Finally, the utility and implications of this functional dimensional category are examined. Comparing Syndromal and Functional Classification Although there are many purposes to diagnostic classification, most researchers seem to agree that the ultimate goal is the development of classes, dimensions, or relational categories that can be empirically wedded to treatment strategies (Adams & Cassidy, 1993; Hayes, Nelson, & Jarrett, 1987; Meehl, 1959). Syndromal classification – whether dimensional or categorical – can be traced back to Wundt and Galen and, thus, is as old as scientific psychology itself (Eysenck, 1986). Syndromal classification starts with constellations of signs and symptoms to identify the disease entities that are presumed to give rise to these constellations. Syndromal classification thus starts with structure and, it is hoped, ends with utility. The attempt in functional classification, conversely, is to start with utility by identifying functional processes with clear treatment implications. 
It then works backward and returns to the issue of identifiable signs and symptoms that reflect these processes. These differences are fundamental. Syndromal Classification The economic and political dominance of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (e.g., 4th ed.; DSM-IV; American Psychiatric Association, 1994) has led to a worldwide adoption of syndromal classification as an analytic strategy in psychopathology. The only widely used alternative, the International Classification of Diseases (ICD) system, was a source document for the original DSM, and continuous efforts have been made to ensure their ongoing compatibility (American Psychiatric Association, 1994). The immediate goal of syndromal classification (Foulds, 1971) is to identify collections of signs (what one sees) and symptoms (what the client's complaint is). The hope is that these syndromes will lead to the identification of disorders with a known etiology, course, and response to treatment. When this has been achieved, we are no longer speaking of syndromes but of diseases. Because the construct of disease involves etiology and response to treatment, these classifications are ultimately a kind of functional unit. Thus, the syndromal classification approach is a topographically oriented classification strategy for the identification of functional units of abnormal behavior. When the same topographical outcome can be established by diverse processes, or when very different topographical outcomes can come from the same process, the syndromal model has a difficult time actually producing its intended functional units (cf. Bandura, 1982; Meehl, 1978). Some medical problems (e.g., cancer) have these features, and in these areas medical researchers no longer look to syndromal classification as a quick route to an understanding of the disease processes involved. 
The link between syndromes (topography of signs and symptoms) and diseases (function) has been notably weak in psychopathology. After over 100 years of effort, almost no psychological diseases have been clearly identified. With the exception of general paresis and a few clearly neurological disorders, psychiatric syndromes have remained syndromes indefinitely. In the absence of progress toward true functional entities, syndromal classification of psychopathology has several down sides. Symptoms are virtually non-falsifiable, because they depend only on certain formal features. Syndromal categories tend to evolve, changing their names frequently and splitting into ever finer subcategories, but, except for political reasons (e.g., homosexuality as a disorder), they rarely simply disappear. As a result, the number of syndromes within the DSM system has increased exponentially (Follette, Houts, & Hayes, 1992). Increasingly refined topographical distinctions can always be made without the restraining and synthesizing effect of the identification of common etiological processes. In physical medicine, syndromes regularly disappear into disease categories. A wide variety of symptoms can be caused by a single disease, or a common symptom can be explained by very different disease entities. For example, \"headaches\" are not a disease, because they could be due to influenza, vision problems, ruptured blood vessels, or a host of other factors. These etiological factors have very different treatment implications. Note that the reliability of symptom detection is not what is at issue. Reliably diagnosing headaches does not translate into reliably diagnosing the underlying functional entity, which after all is the crucial factor for treatment decisions. In the same way, the increasing reliability of DSM diagnoses is of little consolation in and of itself. 
The DSM system specifically eschews the primary importance of functional processes: \"The approach taken in DSM-III is atheoretical with regard to etiology or patho-physiological process\" (American Psychiatric Association, 1980, p. 7). This spirit of etiological agnosticism is carried forward in the most recent DSM incarnation. It is meant to encourage users from widely varying schools of psychology to use the same classification system. Although integration is a laudable goal, the price paid may have been too high (Follette & Hayes, 1992). For example, the link between syndromal categories and biological markers or change processes has been consistently disappointing. To date, compellingly sensitive and specific physiological markers have not been identified for any psychiatric syndrome (Hoes, 1986). Similarly, the link between syndromes and differential treatment has long been known to be weak (see Hayes et al., 1987). We still do not have compelling evidence that syndromal classification contributes substantially to treatment outcome (Hayes et al., 1987). Even in those few instances and not others, mechanisms of change are often unclear or unexamined (Follette, 1995), in part because syndromal categories give researchers few leads about where even to look. Without attention to etiology, treatment utility, and pathological process, the current syndromal system seems unlikely to evolve rapidly into a functional, theoretically relevant system. Functional Classification In a functional approach to classification, the topographical characteristics of any particular individual's behavior are not the basis for classification; instead, behaviors and sets of behaviors are organized by the functional processes that are thought to have produced and maintained them. This functional method is inherently less direct and naive than a syndromal approach, as it requires the application of pre-existing information about psychological processes to specific response forms. 
It thus integrates at least rudimentary forms of theory into the classification strategy, in sharp contrast with the atheoretical goals of the DSM system. Functional Diagnostic Dimensions as a Method of Functional Classification Classical functional analysis is the most dominant example of a functional classification system. It consists of six steps (Hayes & Follette, 1992): Step 1: identify potentially relevant characterist",
"title": ""
},
{
"docid": "neg:1840141_4",
"text": "It is basically a solved problem for a server to authenticate itself to a client using standard methods of Public Key cryptography. The Public Key Infrastructure (PKI) supports the SSL protocol which in turn enables this functionality. The single-point-of-failure in PKI, and hence the focus of attacks, is the Certification Authority. However this entity is commonly off-line, well defended, and not easily got at. For a client to authenticate itself to the server is much more problematical. The simplest and most common mechanism is Username/Password. Although not at all satisfactory, the only onus on the client is to generate and remember a password and the reality is that we cannot expect a client to be sufficiently sophisticated or well organised to protect larger secrets. However Username/Password as a mechanism is breaking down. So-called zero-day attacks on servers commonly recover files containing information related to passwords, and unless the passwords are of sufficiently high entropy they will be found. The commonly applied patch is to insist that clients adopt long, complex, hard-to-remember passwords. This is essentially a second line of defence imposed on the client to protect them in the (increasingly likely) event that the authentication server will be successfully hacked. Note that in an ideal world a client should be able to use a low entropy password, as a server can limit the number of attempts the client can make to authenticate itself. The often proposed alternative is the adoption of multifactor authentication. In the simplest case the client must demonstrate possession of both a token and a password. The banks have been to the forefront of adopting such methods, but the token is invariably a physical device of some kind. Cryptography's embarrassing secret is that to date no completely satisfactory means has been discovered to implement two-factor authentication entirely in software. In this paper we propose such a scheme.",
"title": ""
},
{
"docid": "neg:1840141_5",
"text": "Extraction-transformation-loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, their cleansing, customization and insertion into a data warehouse. Usually, these processes must be completed in a certain time window; thus, it is necessary to optimize their execution time. In this paper, we delve into the logical optimization of ETL processes, modeling it as a state-space search problem. We consider each ETL workflow as a state and fabricate the state space through a set of correct state transitions. Moreover, we provide algorithms towards the minimization of the execution cost of an ETL workflow.",
"title": ""
},
{
"docid": "neg:1840141_6",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "neg:1840141_7",
"text": "In 2008, financial tsunami started to impair the economic development of many countries, including Taiwan. The prediction of financial crisis turns out to be much more important and doubtlessly holds public attention when the world economy goes to depression. This study examined the predictive ability of the four most commonly used financial distress prediction models and thus constructed reliable failure prediction models for public industrial firms in Taiwan. Multiple discriminant analysis (MDA), logit, probit, and artificial neural networks (ANNs) methodology were employed to a dataset of matched sample of failed and non-failed Taiwan public industrial firms during 1998–2005. The final models are validated using within sample test and out-of-the-sample test, respectively. The results indicated that the probit, logit, and ANN models which were used in this study achieve higher prediction accuracy and possess the ability of generalization. The probit model possesses the best and stable performance. However, if the data does not satisfy the assumptions of the statistical approach, then the ANN approach would demonstrate its advantage and achieve higher prediction accuracy. In addition, the models which were used in this study achieve higher prediction accuracy and possess the ability of generalization than those of [Altman, Financial ratios—discriminant analysis and the prediction of corporate bankruptcy using capital market data, Journal of Finance 23 (4) (1968) 589–609, Ohlson, Financial ratios and the probability prediction of bankruptcy, Journal of Accounting Research 18 (1) (1980) 109–131, and Zmijewski, Methodological issues related to the estimation of financial distress prediction models, Journal of Accounting Research 22 (1984) 59–82]. In summary, the models used in this study can be used to assist investors, creditors, managers, auditors, and regulatory agencies in Taiwan to predict the probability of business failure. © 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840141_8",
"text": "Immersive technologies such as augmented reality devices are opening up a new design space for the visual analysis of data. This paper studies the potential of an augmented reality environment for the purpose of collaborative analysis of multidimensional, abstract data. We present ART, a collaborative analysis tool to visualize multidimensional data in augmented reality using an interactive, 3D parallel coordinates visualization. The visualization is anchored to a touch-sensitive tabletop, benefiting from well-established interaction techniques. The results of group-based, expert walkthroughs show that ART can facilitate immersion in the data, a fluid analysis process, and collaboration. Based on the results, we provide a set of guidelines and discuss future research areas to foster the development of immersive technologies as tools for the collaborative analysis of multidimensional data.",
"title": ""
},
{
"docid": "neg:1840141_9",
"text": "BACKGROUND\nFine particulate air pollution has been linked to cardiovascular disease, but previous studies have assessed only mortality and differences in exposure between cities. We examined the association of long-term exposure to particulate matter of less than 2.5 microm in aerodynamic diameter (PM2.5) with cardiovascular events.\n\n\nMETHODS\nWe studied 65,893 postmenopausal women without previous cardiovascular disease in 36 U.S. metropolitan areas from 1994 to 1998, with a median follow-up of 6 years. We assessed the women's exposure to air pollutants using the monitor located nearest to each woman's residence. Hazard ratios were estimated for the first cardiovascular event, adjusting for age, race or ethnic group, smoking status, educational level, household income, body-mass index, and presence or absence of diabetes, hypertension, or hypercholesterolemia.\n\n\nRESULTS\nA total of 1816 women had one or more fatal or nonfatal cardiovascular events, as confirmed by a review of medical records, including death from coronary heart disease or cerebrovascular disease, coronary revascularization, myocardial infarction, and stroke. In 2000, levels of PM2.5 exposure varied from 3.4 to 28.3 microg per cubic meter (mean, 13.5). Each increase of 10 microg per cubic meter was associated with a 24% increase in the risk of a cardiovascular event (hazard ratio, 1.24; 95% confidence interval [CI], 1.09 to 1.41) and a 76% increase in the risk of death from cardiovascular disease (hazard ratio, 1.76; 95% CI, 1.25 to 2.47). For cardiovascular events, the between-city effect appeared to be smaller than the within-city effect. The risk of cerebrovascular events was also associated with increased levels of PM2.5 (hazard ratio, 1.35; 95% CI, 1.08 to 1.68).\n\n\nCONCLUSIONS\nLong-term exposure to fine particulate air pollution is associated with the incidence of cardiovascular disease and death among postmenopausal women. 
Exposure differences within cities are associated with the risk of cardiovascular disease.",
"title": ""
},
{
"docid": "neg:1840141_10",
"text": "Power modulators for compact, repetitive systems are continually faced with new requirements as the corresponding system objectives increase. Changes in pulse rate frequency or number of pulses significantly impact the design of the power conditioning system. In order to meet future power supply requirements, we have developed several high voltage (HV) capacitor charging power supplies (CCPS). This effort focuses on a volume of 6\" x 6\" x 14\" and a weight of 25 lbs. The primary focus was to increase the effective capacitor charge rate, or power output, for the given size and weight. Although increased power output was the principal objective, efficiency and repeatability were also considered. A number of DC-DC converter topologies were compared to determine the optimal design. In order to push the limits of output power, numerous resonant converter parameters were examined. Comparisons of numerous topologies, HV transformers and rectifiers, and switching frequency ranges are presented. The impacts of the control system and integration requirements are also considered.",
"title": ""
},
{
"docid": "neg:1840141_11",
"text": "Human alteration of the global environment has triggered the sixth major extinction event in the history of life and caused widespread changes in the global distribution of organisms. These changes in biodiversity alter ecosystem processes and change the resilience of ecosystems to environmental change. This has profound consequences for services that humans derive from ecosystems. The large ecological and societal consequences of changing biodiversity should be minimized to preserve options for future solutions to global environmental problems.",
"title": ""
},
{
"docid": "neg:1840141_12",
"text": "The hardware implementation of deep neural networks (DNNs) has recently received tremendous attention since many applications require high-speed operations. However, numerous processing elements and complex interconnections are usually required, leading to a large area occupation and a high power consumption. Stochastic computing has shown promising results for area-efficient hardware implementations, even though existing stochastic algorithms require long streams that exhibit long latency. In this paper, we propose an integer form of stochastic computation and introduce some elementary circuits. We then propose an efficient implementation of a DNN based on integral stochastic computing. The proposed architecture uses integer stochastic streams and a modified Finite State Machine-based tanh function to improve the performance and reduce the latency compared to existing stochastic architectures for DNN. The simulation results show the negligible performance loss of the proposed integer stochastic DNN for different network sizes compared to their floating point versions.",
"title": ""
},
{
"docid": "neg:1840141_13",
"text": "Gesture is becoming an increasingly popular means of interacting with computers. However, it is still relatively costly to deploy robust gesture recognition sensors in existing mobile platforms. We present SoundWave, a technique that leverages the speaker and microphone already embedded in most commodity devices to sense in-air gestures around the device. To do this, we generate an inaudible tone, which gets frequency-shifted when it reflects off moving objects like the hand. We measure this shift with the microphone to infer various gestures. In this note, we describe the phenomena and detection algorithm, demonstrate a variety of gestures, and present an informal evaluation on the robustness of this approach across different devices and people.",
"title": ""
},
{
"docid": "neg:1840141_14",
"text": "Tricaine methanesulfonate (TMS) is an anesthetic that is approved for provisional use in some jurisdictions such as the United States, Canada, and the United Kingdom (UK). Many hatcheries and research studies use TMS to immobilize fish for marking or transport and to suppress sensory systems during invasive procedures. Improper TMS use can decrease fish viability, distort physiological data, or result in mortalities. Because animals may be anesthetized by junior staff or students who may have little experience in fish anesthesia, training in the proper use of TMS may decrease variability in recovery, experimental results and increase fish survival. This document acts as a primer on the use of TMS for anesthetizing juvenile salmonids, with an emphasis on its use in surgical applications. Within, we briefly describe many aspects of TMS including the legal uses for TMS, and what is currently known about the proper storage and preparation of the anesthetic. We outline methods and precautions for administration and changes in fish behavior during progressively deeper anesthesia and discuss the physiological effects of TMS and its potential for compromising fish health. Despite the challenges of working with TMS, it is currently one of the few legal options available in the USA and in other countries until other anesthetics are approved and is an important tool for the intracoelomic implantation of electronic tags in fish.",
"title": ""
},
{
"docid": "neg:1840141_15",
"text": "This paper reports on the results of a survey of user interface programming. The survey was widely distributed, and we received 74 responses. The results show that in today's applications, an average of 48% of the code is devoted to the user interface portion. The average time spent on the user interface portion is 45% during the design phase, 50% during the implementation phase, and 37% during the maintenance phase. 34% of the systems were implemented using a toolkit, 27% used a UIMS, 14% used an interface builder, and 26% used no tools. This appears to be because the toolkit systems had more sophisticated user interfaces. The projects using UIMSs or interface builders spent the least percent of time and code on the user interface (around 41%) suggesting that these tools are effective. In general, people were happy with the tools they used, especially the graphical interface builders. The most common problems people reported when developing a user interface included getting users' requirements, writing help text, achieving consistency, learning how to use the tools, getting acceptable performance, and communicating among various parts of the program.",
"title": ""
},
{
"docid": "neg:1840141_16",
"text": "Parametricism has come to scene as an important style in both architectural design and construction where conventional Computer-Aided Design (CAD) tool has become substandard. Building Information Modeling (BIM) is a recent object-based parametric modeling tool for exploring the relationship between the geometric and non-geometric components of the model. The aim of this research is to explore the capabilities of BIM in achieving variety and flexibility in design extending from architectural to urban scale. This study proposes a method by using User Interface (UI) and Application Programming Interface (API) tools of BIM to generate a complex roof structure as a parametric family. This project demonstrates a dynamic variety in architectural scale. We hypothesized that if a function calculating the roof length is defined using a variety of inputs, it can later be applied to urban scale by utilizing a database of the inputs.",
"title": ""
},
{
"docid": "neg:1840141_17",
"text": "Transaction traces analysis is a key utility for marketing, trend monitoring, and fraud detection purposes. However, they can also be used for designing and verification of contextual risk management systems for card-present transactions. In this paper, we presented a novel approach to collect detailed transaction traces directly from payment terminal. Thanks to that, it is possible to analyze each transaction step precisely, including its frequency and timing. We also demonstrated our approach to analyze such data based on real-life experiment. Finally, we concluded this paper with important findings for designers of such a system.",
"title": ""
},
{
"docid": "neg:1840141_18",
"text": "Many fish populations have both resident and migratory individuals. Migrants usually grow larger and have higher reproductive potential but lower survival than resident conspecifics. The ‘decision’ about migration versus residence probably depends on the individual growth rate, or a physiological process like metabolic rate which is correlated with growth rate. Fish usually mature as their somatic growth levels off, where energetic costs of maintenance approach energetic intake. After maturation, growth also stagnates because of resource allocation to reproduction. Instead of maturation, however, fish may move to an alternative feeding habitat and their fitness may thereby be increased. When doing so, maturity is usually delayed, either to the new asymptotic length, or sooner, if that gives higher expected fitness. Females often dominate among migrants and males among residents. The reason is probably that females maximize their fitness by growing larger, because their reproductive success generally increases exponentially with body size. Males, on the other hand, may maximize fitness by alternative life histories, e.g. fighting versus sneaking, as in many salmonid species where small residents are the sneakers and large migrants the fighters. Partial migration appears to be partly developmental, depending on environmental conditions, and partly genetic, inherited as a quantitative trait influenced by a number of genes.",
"title": ""
}
] |
1840142 | DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker | [
{
"docid": "pos:1840142_0",
"text": "In the field of computational game theory, games are often compared in terms of their size. This can be measured in several ways, including the number of unique game states, the number of decision points, and the total number of legal actions over all decision points. These numbers are either known or estimated for a wide range of classic games such as chess and checkers. In the stochastic and imperfect information game of poker, these sizes are easily computed in “limit” games which restrict the players’ available actions, but until now had only been estimated for the more complicated “no-limit” variants. In this paper, we describe a simple algorithm for quickly computing the size of two-player no-limit poker games, provide an implementation of this algorithm, and present for the first time precise counts of the number of game states, information sets, actions and terminal nodes in the no-limit poker games played in the Annual Computer Poker Competition.",
"title": ""
}
] | [
{
"docid": "neg:1840142_0",
"text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.",
"title": ""
},
{
"docid": "neg:1840142_1",
"text": "We are interested in counting the number of instances of object classes in natural, everyday images. Previous counting approaches tackle the problem in restricted domains such as counting pedestrians in surveillance videos. Counts can also be estimated from outputs of other vision tasks like object detection. In this work, we build dedicated models for counting designed to tackle the large variance in counts, appearances, and scales of objects found in natural scenes. Our approach is inspired by the phenomenon of subitizing – the ability of humans to make quick assessments of counts given a perceptual signal, for small count values. Given a natural scene, we employ a divide and conquer strategy while incorporating context across the scene to adapt the subitizing idea to counting. Our approach offers consistent improvements over numerous baseline approaches for counting on the PASCAL VOC 2007 and COCO datasets. Subsequently, we study how counting can be used to improve object detection. We then show a proof of concept application of our counting methods to the task of Visual Question Answering, by studying the how many? questions in the VQA and COCO-QA datasets.",
"title": ""
},
{
"docid": "neg:1840142_2",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nBaphicacanthus cusia root also names \"Nan Ban Lan Gen\" has been traditionally used to prevent and treat influenza A virus infections. Here, we identified a peptide derivative, aurantiamide acetate (compound E17), as an active compound in extracts of B. cusia root. Although studies have shown that aurantiamide acetate possesses antioxidant and anti-inflammatory properties, the effects and mechanism by which it functions as an anti-viral or as an anti-inflammatory during influenza virus infection are poorly defined. Here we investigated the anti-viral activity and possible mechanism of compound E17 against influenza virus infection.\n\n\nMATERIALS AND METHODS\nThe anti-viral activity of compound E17 against Influenza A virus (IAV) was determined using the cytopathic effect (CPE) inhibition assay. Viruses were titrated on Madin-Darby canine kidney (MDCK) cells by plaque assays. Ribonucleoprotein (RNP) luciferase reporter assay was further conducted to investigate the effect of compound E17 on the activity of the viral polymerase complex. HEK293T cells with a stably transfected NF-κB luciferase reporter plasmid were employed to examine the activity of compound E17 on NF-κB activation. Activation of the host signaling pathway induced by IAV infection in the absence or presence of compound E17 was assessed by western blotting. The effect of compound E17 on IAV-induced expression of pro-inflammatory cytokines was measured by real-time quantitative PCR and Luminex assays.\n\n\nRESULTS\nCompound E17 exerted an inhibitory effect on IAV replication in MDCK cells but had no effect on avian IAV and influenza B virus. Treatment with compound E17 resulted in a reduction of RNP activity and virus titers. Compound E17 treatment inhibited the transcriptional activity of NF-κB in a NF-κB luciferase reporter stable HEK293 cell after stimulation with TNF-α. 
Furthermore, compound E17 blocked the activation of the NF-κB signaling pathway and decreased mRNA expression levels of pro-inflammatory genes in infected cells. Compound E17 also suppressed the production of IL-6, TNF-α, IL-8, IP-10 and RANTES from IAV-infected lung epithelial (A549) cells.\n\n\nCONCLUSIONS\nThese results indicate that compound E17 isolated from B. cusia root has potent anti-viral and anti-inflammatory effects on IAV-infected cells via inhibition of the NF-κB pathway. Therefore, compound E17 could be a potential therapeutic agent for the treatment of influenza.",
"title": ""
},
{
"docid": "neg:1840142_3",
"text": "Increasing needs in efficient storage management and better utilization of network bandwidth with less data transfer have led the computing community to consider data compression as a solution. However, compression introduces extra overhead and performance can suffer. The key elements in making the decision to use compression are execution time and compression ratio. Due to negative performance impact, compression is often neglected. General purpose computing on graphic processing units (GPUs) introduces new opportunities where parallelism is available. Our work targets the use of opportunities in GPU based systems by exploiting parallelism in compression algorithms. In this paper we present an implementation of the Lempel-Ziv-Storer-Szymanski (LZSS) loss less data compression algorithm by using NVIDIA GPUs Compute Unified Device Architecture (CUDA) Framework. Our implementation of the LZSS algorithm on GPUs significantly improves the performance of the compression process compared to CPU based implementation without any loss in compression ratio. This can support GPU based clusters in solving application bandwidth problems. Our system outperforms the serial CPU LZSS implementation by up to 18x, the parallel threaded version up to 3x and the BZIP2 program by up to 6x in terms of compression time, showing the promise of CUDA systems in loss less data compression. To give the programmers an easy to use tool, our work also provides an API for in memory compression without the need for reading from and writing to files, in addition to the version involving I/O.",
"title": ""
},
{
"docid": "neg:1840142_4",
"text": "Assess extensor carpi ulnaris (ECU) tendon position in the ulnar groove, determine the frequency of tendon “dislocation” with the forearm prone, neutral, and supine, and determine if an association exists between ulnar groove morphology and tendon position in asymptomatic volunteers. Axial proton density-weighted MR was performed through the distal radioulnar joint with the forearm prone, neutral, and supine in 38 asymptomatic wrists. The percentage of the tendon located beyond the ulnar-most border of the ulnar groove was recorded. Ulnar groove depth and length was measured and ECU tendon signal was assessed. 15.8 % of tendons remained within the groove in all forearm positions. In 76.3 %, the tendon translated medially from prone to supine. The tendon “dislocated” in 0, 10.5, and 39.5 % with the forearm prone, neutral and supine, respectively. In 7.9 % prone, 5.3 % neutral, and 10.5 % supine exams, the tendon was 51–99 % beyond the ulnar border of the ulnar groove. Mean ulnar groove depth and length were 1.6 and 7.7 mm, respectively, with an overall trend towards greater degrees of tendon translation in shorter, shallower ulnar grooves. The ECU tendon shifts in a medial direction when the forearm is supine; however, tendon “dislocation” has not been previously documented in asymptomatic volunteers. The ECU tendon medially translated or frankly dislocated from the ulnar groove in the majority of our asymptomatic volunteers, particularly when the forearm is supine. Overall greater degrees of tendon translation were observed in shorter and shallower ulnar grooves.",
"title": ""
},
{
"docid": "neg:1840142_5",
"text": "Even though considerable attention has been given to the polarity of words (positive and negative) and the creation of large polarity lexicons, research in emotion analysis has had to rely on limited and small emotion lexicons. In this paper we show how the combined strength and wisdom of the crowds can be used to generate a large, high-quality, word–emotion and word–polarity association lexicon quickly and inexpensively. We enumerate the challenges in emotion annotation in a crowdsourcing scenario and propose solutions to address them. Most notably, in addition to questions about emotions associated with terms, we show how the inclusion of a word choice question can discourage malicious data entry, help identify instances where the annotator may not be familiar with the target term (allowing us to reject such annotations), and help obtain annotations at sense level (rather than at word level). We conducted experiments on how to formulate the emotionannotation questions, and show that asking if a term is associated with an emotion leads to markedly higher inter-annotator agreement than that obtained by asking if a term evokes an emotion.",
"title": ""
},
{
"docid": "neg:1840142_6",
"text": "Melanoma mortality rates are the highest amongst skin cancer patients. Melanoma is life threating when it grows beyond the dermis of the skin. Hence, depth is an important factor to diagnose melanoma. This paper introduces a non-invasive computerized dermoscopy system that considers the estimated depth of skin lesions for diagnosis. A 3-D skin lesion reconstruction technique using the estimated depth obtained from regular dermoscopic images is presented. On basis of the 3-D reconstruction, depth and 3-D shape features are extracted. In addition to 3-D features, regular color, texture, and 2-D shape features are also extracted. Feature extraction is critical to achieve accurate results. Apart from melanoma, in-situ melanoma the proposed system is designed to diagnose basal cell carcinoma, blue nevus, dermatofibroma, haemangioma, seborrhoeic keratosis, and normal mole lesions. For experimental evaluations, the PH2, ISIC: Melanoma Project, and ATLAS dermoscopy data sets is considered. Different feature set combinations is considered and performance is evaluated. Significant performance improvement is reported the post inclusion of estimated depth and 3-D features. The good classification scores of sensitivity = 96%, specificity = 97% on PH2 data set and sensitivity = 98%, specificity = 99% on the ATLAS data set is achieved. Experiments conducted to estimate tumor depth from 3-D lesion reconstruction is presented. Experimental results achieved prove that the proposed computerized dermoscopy system is efficient and can be used to diagnose varied skin lesion dermoscopy images.",
"title": ""
},
{
"docid": "neg:1840142_7",
"text": "TrustZone-based Real-time Kernel Protection (TZ-RKP) is a novel system that provides real-time protection of the OS kernel using the ARM TrustZone secure world. TZ-RKP is more secure than current approaches that use hypervisors to host kernel protection tools. Although hypervisors provide privilege and isolation, they face fundamental security challenges due to their growing complexity and code size. TZ-RKP puts its security monitor, which represents its entire Trusted Computing Base (TCB), in the TrustZone secure world; a safe isolated environment that is dedicated to security services. Hence, the security monitor is safe from attacks that can potentially compromise the kernel, which runs in the normal world. Using the secure world for kernel protection has been crippled by the lack of control over targets that run in the normal world. TZ-RKP solves this prominent challenge using novel techniques that deprive the normal world from the ability to control certain privileged system functions. These functions are forced to route through the secure world for inspection and approval before being executed. TZ-RKP's control of the normal world is non-bypassable. It can effectively stop attacks that aim at modifying or injecting kernel binaries. It can also stop attacks that involve modifying the system memory layout, e.g, through memory double mapping. This paper presents the implementation and evaluation of TZ-RKP, which has gone through rigorous and thorough evaluation of effectiveness and performance. It is currently deployed on the latest models of the Samsung Galaxy series smart phones and tablets, which clearly demonstrates that it is a practical real-world system.",
"title": ""
},
{
"docid": "neg:1840142_8",
"text": "This paper presents the impact of automatic feature extraction used in a deep learning architecture such as Convolutional Neural Network (CNN). Recently CNN has become a very popular tool for image classification which can automatically extract features, learn and classify them. It is a common belief that CNN can always perform better than other well-known classifiers. However, there is no systematic study which shows that automatic feature extraction in CNN is any better than other simple feature extraction techniques, and there is no study which shows that other simple neural network architectures cannot achieve same accuracy as CNN. In this paper, a systematic study to investigate CNN's feature extraction is presented. CNN with automatic feature extraction is firstly evaluated on a number of benchmark datasets and then a simple traditional Multi-Layer Perceptron (MLP) with full image, and manual feature extraction are evaluated on the same benchmark datasets. The purpose is to see whether feature extraction in CNN performs any better than a simple feature with MLP and full image with MLP. Many experiments were systematically conducted by varying number of epochs and hidden neurons. The experimental results revealed that traditional MLP with suitable parameters can perform as good as CNN or better in certain cases.",
"title": ""
},
{
"docid": "neg:1840142_9",
"text": "In modern computer systems, system event logs have always been the primary source for checking system status. As computer systems become more and more complex, the interaction between software and hardware increases frequently. The components will generate enormous log information, including running reports and fault information. The sheer quantity of data is a great challenge for analysis relying on the manual method. In this paper, we implement a management and analysis system of log information, which can assist system administrators to understand the real-time status of the entire system, classify logs into different fault types, and determine the root cause of the faults. In addition, we improve the existing fault correlation analysis method based on the results of system log classification. We apply the system in a cloud computing environment for evaluation. The results show that our system can classify fault logs automatically and effectively. With the proposed system, administrators can easily detect the root cause of faults.",
"title": ""
},
{
"docid": "neg:1840142_10",
"text": "V marketing is a form of peer-to-peer communication in which individuals are encouraged to pass on promotional messages within their social networks. Conventional wisdom holds that the viral marketing process is both random and unmanageable. In this paper, we deconstruct the process and investigate the formation of the activated digital network as distinct from the underlying social network. We then consider the impact of the social structure of digital networks (random, scale free, and small world) and of the transmission behavior of individuals on campaign performance. Specifically, we identify alternative social network models to understand the mediating effects of the social structures of these models on viral marketing campaigns. Next, we analyse an actual viral marketing campaign and use the empirical data to develop and validate a computer simulation model for viral marketing. Finally, we conduct a number of simulation experiments to predict the spread of a viral message within different types of social network structures under different assumptions and scenarios. Our findings confirm that the social structure of digital networks play a critical role in the spread of a viral message. Managers seeking to optimize campaign performance should give consideration to these findings before designing and implementing viral marketing campaigns. We also demonstrate how a simulation model is used to quantify the impact of campaign management inputs and how these learnings can support managerial decision making.",
"title": ""
},
{
"docid": "neg:1840142_11",
"text": "The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions (“dual dictionaries” of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastasis, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.",
"title": ""
},
{
"docid": "neg:1840142_12",
"text": "Syllogisms are arguments about the properties of entities. They consist of 2 premises and a conclusion, which can each be in 1 of 4 \"moods\": All A are B, Some A are B, No A are B, and Some A are not B. Their logical analysis began with Aristotle, and their psychological investigation began over 100 years ago. This article outlines the logic of inferences about syllogisms, which includes the evaluation of the consistency of sets of assertions. It also describes the main phenomena of reasoning about properties. There are 12 extant theories of such inferences, and the article outlines each of them and describes their strengths and weaknesses. The theories are of 3 main sorts: heuristic theories that capture principles that could underlie intuitive responses, theories of deliberative reasoning based on formal rules of inference akin to those of logic, and theories of deliberative reasoning based on set-theoretic diagrams or models. The article presents a meta-analysis of these extant theories of syllogisms using data from 6 studies. None of the 12 theories provides an adequate account, and so the article concludes with a guide-based on its qualitative and quantitative analyses-of how best to make progress toward a satisfactory theory.",
"title": ""
},
{
"docid": "neg:1840142_13",
"text": "The Internet of Things (IoT) is a vision which real-world objects are part of the internet. Every object is uniquely identified, and accessible to the network. There are various types of communication protocol for connect the device to the Internet. One of them is a Low Power Wide Area Network (LPWAN) which is a novel technology use to implement IoT applications. There are many platforms of LPWAN such as NB-IoT, LoRaWAN. In this paper, the experimental performance evaluation of LoRaWAN over a real environment in Bangkok, Thailand is presented. From these experimental results, the communication ranges in both an outdoor and an indoor environment are limited. Hence, the IoT application with LoRaWAN technology can be reliable in limited of communication ranges.",
"title": ""
},
{
"docid": "neg:1840142_14",
"text": "Fairness has emerged as an important category of analysis for machine learning systems in some application areas. In extending the concept of fairness to recommender systems, there is an essential tension between the goals of fairness and those of personalization. However, there are contexts in which equity across recommendation outcomes is a desirable goal. It is also the case that in some applications fairness may be a multisided concept, in which the impacts on multiple groups of individuals must be considered. In this paper, we examine two different cases of fairness-aware recommender systems: consumer-centered and provider-centered. We explore the concept of a balanced neighborhood as a mechanism to preserve personalization in recommendation while enhancing the fairness of recommendation outcomes. We show that a modified version of the Sparse Linear Method (SLIM) can be used to improve the balance of user and item neighborhoods, with the result of achieving greater outcome fairness in real-world datasets with minimal loss in ranking performance.",
"title": ""
},
{
"docid": "neg:1840142_15",
"text": "Why do some new technologies emerge and quickly supplant incumbent technologies while others take years or decades to take off? We explore this question by presenting a framework that considers both the focal competing technologies as well as the ecosystems in which they are embedded. Within our framework, each episode of technology transition is characterized by the ecosystem emergence challenge that confronts the new technology and the ecosystem extension opportunity that is available to the old technology. We identify four qualitatively distinct regimes with clear predictions for the pace of substitution. Evidence from 10 episodes of technology transitions in the semiconductor lithography equipment industry from 1972 to 2009 offers strong support for our framework. We discuss the implication of our approach for firm strategy. Disciplines Management Sciences and Quantitative Methods This journal article is available at ScholarlyCommons: https://repository.upenn.edu/mgmt_papers/179 Innovation Ecosystems and the Pace of Substitution: Re-examining Technology S-curves Ron Adner Tuck School of Business, Dartmouth College Strategy and Management 100 Tuck Hall Hanover, NH 03755, USA Tel: 1 603 646 9185 Email:\t\r ron.adner@dartmouth.edu Rahul Kapoor The Wharton School University of Pennsylvania Philadelphia, PA-19104 Tel : 1 215 898 6458 Email: kapoorr@wharton.upenn.edu",
"title": ""
},
{
"docid": "neg:1840142_16",
"text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs. We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.",
"title": ""
},
{
"docid": "neg:1840142_17",
"text": "The most common approach in text mining classification tasks is to rely on features like words, part-of-speech tags, stems, or some other high-level linguistic features. Unlike the common approach, we present a method that uses only character p-grams (also known as n-grams) as features for the Arabic Dialect Identification (ADI) Closed Shared Task of the DSL 2016 Challenge. The proposed approach combines several string kernels using multiple kernel learning. In the learning stage, we try both Kernel Discriminant Analysis (KDA) and Kernel Ridge Regression (KRR), and we choose KDA as it gives better results in a 10-fold cross-validation carried out on the training set. Our approach is shallow and simple, but the empirical results obtained in the ADI Shared Task prove that it achieves very good results. Indeed, we ranked on the second place with an accuracy of 50.91% and a weighted F1 score of 51.31%. We also present improved results in this paper, which we obtained after the competition ended. Simply by adding more regularization into our model to make it more suitable for test data that comes from a different distribution than training data, we obtain an accuracy of 51.82% and a weighted F1 score of 52.18%. Furthermore, the proposed approach has an important advantage in that it is language independent and linguistic theory neutral, as it does not require any NLP tools.",
"title": ""
},
{
"docid": "neg:1840142_18",
"text": "The component \"thing\" of the Internet of Things does not yet exist in current business process modeling standards. The \"thing\" is the essential and central concept of the Internet of Things, and without its consideration we will not be able to model the business processes of the future, which will be able to measure or change states of objects in our real-world environment. The presented approach focuses on integrating the concept of the Internet of Things into the meta-model of the process modeling standard BPMN 2.0 as standard-conform as possible. By a terminological and conceptual delimitation, three components of the standard are examined and compared towards a possible expansion. By implementing the most appropriate solution, the new thing concept becomes usable for modelers, both as a graphical and machine-readable element.",
"title": ""
},
{
"docid": "neg:1840142_19",
"text": "sists of two excitation laser beams. One beam scans the volume of the brain from the side of a horizontally positioned zebrafish but is rapidly switched off when inside an elliptical exclusion region located over the eye (Fig. 1b). Simultaneously, a second beam scans from the front, to cover the forebrain and the regions between the eyes. Together, these two beams achieve nearly complete coverage of the brain without exposing the retina to direct laser excitation, which allows unimpeded presentation of visual stimuli that are projected onto a screen below the fish. To monitor intended swimming behavior, we used existing methods for recording activity from motor neuron axons in the tail of paralyzed larval zebrafish1 (Fig. 1a and Supplementary Note). This system provides imaging speeds of up to three brain volumes per second (40 planes per brain volume); increases in camera speed will allow for faster volumetric sampling. Because light-sheet imaging may still introduce some additional sensory stimulation (excitation light scattering in the brain and reflected from the glass walls of the chamber), we assessed whether fictive behavior in 5–7 d post-fertilization (d.p.f.) fish was robust to the presence of the light sheets. We tested two visuoLight-sheet functional imaging in fictively behaving zebrafish",
"title": ""
}
] |
1840143 | Fast Image Inpainting Based on Coherence Transport | [
{
"docid": "pos:1840143_0",
"text": "Shock filters are based on the idea to apply locally either a dilation or an erosion process, depending on whether the pixel belongs to the influence zone of a maximum or a minimum. They create a sharp shock between two influence zones and produce piecewise constant segmentations. In this paper we design specific shock filters for the enhancement of coherent flow-like structures. They are based on the idea to combine shock filtering with the robust orientation estimation by means of the structure tensor. Experiments with greyscale and colour images show that these novel filters may outperform previous shock filters as well as coherence-enhancing diffusion filters.",
"title": ""
}
] | [
{
"docid": "neg:1840143_0",
"text": "We present an augmented reality application for mechanics education. It utilizes a recent physics engine developed for the PC gaming market to simulate physical experiments in the domain of mechanics in real time. Students are enabled to actively build own experiments and study them in a three-dimensional virtual world. A variety of tools are provided to analyze forces, mass, paths and other properties of objects before, during and after experiments. Innovative teaching content is presented that exploits the strengths of our immersive virtual environment. PhysicsPlayground serves as an example of how current technologies can be combined to deliver a new quality in physics education.",
"title": ""
},
{
"docid": "neg:1840143_1",
"text": "The mobile social network (MSN) combines techniques in social science and wireless communications for mobile networking. The MSN can be considered as a system which provides a variety of data delivery services involving the social relationship among mobile users. This paper presents a comprehensive survey on the MSN specifically from the perspectives of applications, network architectures, and protocol design issues. First, major applications of the MSN are reviewed. Next, different architectures of the MSN are presented. Each of these different architectures supports different data delivery scenarios. The unique characteristics of social relationship in MSN give rise to different protocol design issues. These research issues (e.g., community detection, mobility, content distribution, content sharing protocols, and privacy) and the related approaches to address data delivery in the MSN are described. At the end, several important research directions are outlined.",
"title": ""
},
{
"docid": "neg:1840143_2",
"text": "Co-contamination of the environment with toxic chlorinated organic and heavy metal pollutants is one of the major problems facing industrialized nations today. Heavy metals may inhibit biodegradation of chlorinated organics by interacting with enzymes directly involved in biodegradation or those involved in general metabolism. Predictions of metal toxicity effects on organic pollutant biodegradation in co-contaminated soil and water environments is difficult since heavy metals may be present in a variety of chemical and physical forms. Recent advances in bioremediation of co-contaminated environments have focussed on the use of metal-resistant bacteria (cell and gene bioaugmentation), treatment amendments, clay minerals and chelating agents to reduce bioavailable heavy metal concentrations. Phytoremediation has also shown promise as an emerging alternative clean-up technology for co-contaminated environments. However, despite various investigations, in both aerobic and anaerobic systems, demonstrating that metal toxicity hampers the biodegradation of the organic component, a paucity of information exists in this area of research. Therefore, in this review, we discuss the problems associated with the degradation of chlorinated organics in co-contaminated environments, owing to metal toxicity and shed light on possible improvement strategies for effective bioremediation of sites co-contaminated with chlorinated organic compounds and heavy metals.",
"title": ""
},
{
"docid": "neg:1840143_3",
"text": "With the increasing adoption of NoSQL data base systems like MongoDB or CouchDB more and more applications store structured data according to a non-relational, document oriented model. Exposing this structured data as Linked Data is currently inhibited by a lack of standards as well as tools and requires the implementation of custom solutions. While recent efforts aim at expressing transformations of such data models into RDF in a standardized manner, there is a lack of approaches which facilitate SPARQL execution over mapped non-relational data sources. With SparqlMap-M we show how dynamic SPARQL access to non-relational data can be achieved. SparqlMap-M is an extension to our SPARQL-to-SQL rewriter SparqlMap that performs a (partial) transformation of SPARQL queries by using a relational abstraction over a document store. Further, duplicate data in the document store is used to reduce the number of joins and custom optimizations are introduced. Our showcase scenario employs the Berlin SPARQL Benchmark (BSBM) with different adaptions to a document data model. We use this scenario to demonstrate the viability of our approach and compare it to different MongoDB setups and native SQL.",
"title": ""
},
{
"docid": "neg:1840143_4",
"text": "This paper studies the prediction of head pose from still images, and summarizes the outcome of a recently organized competition, where the task was to predict the yaw and pitch angles of an image dataset with 2790 samples with known angles. The competition received 292 entries from 52 participants, the best ones clearly exceeding the state-of-the-art accuracy. In this paper, we present the key methodologies behind selected top methods, summarize their prediction accuracy and compare with the current state of the art.",
"title": ""
},
{
"docid": "neg:1840143_5",
"text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.",
"title": ""
},
{
"docid": "neg:1840143_6",
"text": "Patient interactions with health care providers result in entries to electronic health records (EHRs). EHRs were built for clinical and billing purposes but contain many data points about an individual. Mining these records provides opportunities to extract electronic phenotypes that can be paired with genetic data to identify genes underlying common human diseases. This task remains challenging: high quality phenotyping is costly and requires physician review; many fields in the records are sparsely filled; and our definitions of diseases are continuing to improve over time. Here we develop and evaluate a semi-supervised learning method for EHR phenotype extraction using denoising autoencoders for phenotype stratification. By combining denoising autoencoders with random forests we find classification improvements across simulation models, particularly in cases where only a small number of patients have high quality phenotype. This situation is commonly encountered in research with EHRs. Denoising autoencoders perform dimensionality reduction allowing visualization and clustering for the discovery of new subtypes of disease. This method represents a promising approach to clarify disease subtypes and improve genotype-phenotype association studies that leverage EHRs.",
"title": ""
},
{
"docid": "neg:1840143_7",
"text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.",
"title": ""
},
{
"docid": "neg:1840143_8",
"text": "Along with its numerous benefits, the Internet also created numerous ways to compromise the security and stability of the systems connected to it. In 2003, 137529 incidents were reported to CERT/CC © while in 1999, there were 9859 reported incidents (CERT/CC©, 2003). Operations, which are primarily designed to protect the availability, confidentiality and integrity of critical network information systems, are considered to be within the scope of security management. Security management operations protect computer networks against denial-of-service attacks, unauthorized disclosure of information, and the modification or destruction of data. Moreover, the automated detection and immediate reporting of these events are required in order to provide the basis for a timely response to attacks (Bass, 2000). Security management plays an important, albeit often neglected, role in network management tasks.",
"title": ""
},
{
"docid": "neg:1840143_9",
"text": "In this paper we propose a novel entity annotator for texts which hinges on TagME's algorithmic technology, currently the best one available. The novelty is twofold: from the one hand, we have engineered the software in order to be modular and more efficient; from the other hand, we have improved the annotation pipeline by re-designing all of its three main modules: spotting, disambiguation and pruning. In particular, the re-design has involved the detailed inspection of the performance of these modules by developing new algorithms which have been in turn tested over all publicly available datasets (i.e. AIDA, IITB, MSN, AQUAINT, and the one of the ERD Challenge). This extensive experimentation allowed us to derive the best combination which achieved on the ERD development dataset an F1 score of 74.8%, which turned out to be 67.2% F1 for the test dataset. This final result was due to an impressive precision equal to 87.6%, but very low recall 54.5%. With respect to classic TagME on the development dataset the improvement ranged from 1% to 9% on the D2W benchmark, depending on the disambiguation algorithm being used. As a side result, the final software can be interpreted as a flexible library of several parsing/disambiguation and pruning modules that can be used to build up new and more sophisticated entity annotators. We plan to release our library to the public as an open-source project.",
"title": ""
},
{
"docid": "neg:1840143_10",
"text": "Real-time ETL and data warehouse multidimensional modeling (DMM) of business operational data has become an important research issue in the area of real-time data warehousing (RTDW). In this study, some of the recently proposed real-time ETL technologies from the perspectives of data volumes, frequency, latency, and mode have been discussed. In addition, we highlight several advantages of using semi-structured DMM (i.e. XML) in RTDW instead of traditional structured DMM (i.e., relational). We compare the two DMMs on the basis of four characteristics: heterogeneous data integration, types of measures supported, aggregate query processing, and incremental maintenance. We implemented the RTDW framework for an example telecommunication organization. Our experimental analysis shows that if the delay comes from the incremental maintenance of DMM, no ETL technology (full-reloading or incremental-loading) can help in real-time business intelligence.",
"title": ""
},
{
"docid": "neg:1840143_11",
"text": "Conveying a narrative with visualizations often requires choosing an order in which to present visualizations. While evidence exists that narrative sequencing in traditional stories can affect comprehension and memory, little is known about how sequencing choices affect narrative visualization. We consider the forms and reactions to sequencing in narrative visualization presentations to provide a deeper understanding with a focus on linear, 'slideshow-style' presentations. We conduct a qualitative analysis of 42 professional narrative visualizations to gain empirical knowledge on the forms that structure and sequence take. Based on the results of this study we propose a graph-driven approach for automatically identifying effective sequences in a set of visualizations to be presented linearly. Our approach identifies possible transitions in a visualization set and prioritizes local (visualization-to-visualization) transitions based on an objective function that minimizes the cost of transitions from the audience perspective. We conduct two studies to validate this function. We also expand the approach with additional knowledge of user preferences for different types of local transitions and the effects of global sequencing strategies on memory, preference, and comprehension. Our results include a relative ranking of types of visualization transitions by the audience perspective and support for memory and subjective rating benefits of visualization sequences that use parallelism as a structural device. We discuss how these insights can guide the design of narrative visualization and systems that support optimization of visualization sequence.",
"title": ""
},
{
"docid": "neg:1840143_12",
"text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS ’15 ). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.",
"title": ""
},
{
"docid": "neg:1840143_13",
"text": "This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent’s belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, MonteCarlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 × 10 battleship and partially observable PacMan, with approximately 10^18 and 10^56 states respectively. Our MonteCarlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.",
"title": ""
},
{
"docid": "neg:1840143_14",
"text": "Text detection in complex background images is a challenging task for intelligent vehicles. Actually, almost all the widely-used systems focus on commonly used languages while for some minority languages, such as the Uyghur language, text detection is paid less attention. In this paper, we propose an effective Uyghur language text detection system in complex background images. First, a new channel-enhanced maximally stable extremal regions (MSERs) algorithm is put forward to detect component candidates. Second, a two-layer filtering mechanism is designed to remove most non-character regions. Third, the remaining component regions are connected into short chains, and the short chains are extended by a novel extension algorithm to connect the missed MSERs. Finally, a two-layer chain elimination filter is proposed to prune the non-text chains. To evaluate the system, we build a new data set by various Uyghur texts with complex backgrounds. Extensive experimental comparisons show that our system is obviously effective for Uyghur language text detection in complex background images. The F-measure is 85%, which is much better than the state-of-the-art performance of 75.5%.",
"title": ""
},
{
"docid": "neg:1840143_15",
"text": "This is a review of unsupervised learning applied to videos with the aim of learning visual representations. We look at different realizations of the notion of temporal coherence across various models. We try to understand the challenges being faced, the strengths and weaknesses of different approaches and identify directions for future work. Unsupervised Learning of Visual Representations using Videos Nitish Srivastava Department of Computer Science, University of Toronto",
"title": ""
},
{
"docid": "neg:1840143_16",
"text": "The case study presented here, deals with the subject of second language acquisition making at the same time an effort to show as much as possible how L1 was acquired and the ways L1 affected L2, through the process of examining a Greek girl who has been exposed to the English language from the age of eight. Furthermore, I had the chance to analyze the method used by the frontistirio teachers and in what ways this method helps or negatively influences children regarding their performance in the four basic skills. We will evaluate the evidence acquired by the girl by studying briefly the basic theories provided by important figures in the field of L2. Finally, I will also include my personal suggestions and the improvement of the child’s abilities and I will state my opinion clearly.",
"title": ""
},
{
"docid": "neg:1840143_17",
"text": "The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N = 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.",
"title": ""
},
{
"docid": "neg:1840143_18",
"text": "We describe the numerical methods required in our approach to multi-dimensional scaling. The rationale of this approach has appeared previously. 1. Introduction We describe a numerical method for multidimensional scaling. In a companion paper [7] we describe the rationale for our approach to scaling, which is related to that of Shepard [9]. As the numerical methods required are largely unfamiliar to psychologists, and even have elements of novelty within the field of numerical analysis, it seems worthwhile to describe them. In [7] we suppose that there are n objects 1, · · · , n, and that we have experimental values δij of dissimilarity between them. For a configuration of points x1, · · · , xn in t-dimensional space, with interpoint distances dij, we defined the stress of the configuration by The stress is intended to be a measure of how well the configuration matches the data. More fully, it is supposed that the \"true\" dissimilarities result from some unknown monotone distortion of the interpoint distances of some \"true\" configuration, and that the observed dissimilarities differ from the true dissimilarities only because of random fluctuation. The stress is essentially the root-mean-square residual departure from this hypothesis. By definition, the best-fitting configuration in t-dimensional space, for a fixed value of t, is that configuration which minimizes the stress. The primary computational problem is to find that configuration. A secondary computational problem, of independent interest, is to find the values of",
"title": ""
},
{
"docid": "neg:1840143_19",
"text": "We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers. The complex structure of the light field is thus reduced to a comparatively low-dimensional representation, which can be decoded in a variety of ways. The different pathways of upconvolution we currently support are for disparity estimation and separation of the lightfield into diffuse and specular intrinsic components. The key idea is that we can jointly perform unsupervised training for the autoencoder path of the network, and supervised training for the other decoders. This way, we find features which are both tailored to the respective tasks and generalize well to datasets for which only example light fields are available. We provide an extensive evaluation on synthetic light field data, and show that the network yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.",
"title": ""
}
] |
1840144 | Spherical symmetric 3D local ternary patterns for natural, texture and biomedical image indexing and retrieval | [
{
"docid": "pos:1840144_0",
"text": "This correspondence introduces a new approach to characterize textures at multiple scales. The performance of wavelet packet spaces are measured in terms of sensitivity and selectivity for the classification of twenty-five natural textures. Both energy and entropy metrics were computed for each wavelet packet and incorporated into distinct scale space representations, where each wavelet packet (channel) reflected a specific scale and orientation sensitivity. Wavelet packet representations for twenty-five natural textures were classified without error by a simple two-layer network classifier. An analyzing function of large regularity (D20) was shown to be slightly more efficient in representation and discrimination than a similar function with fewer vanishing moments (D6). In addition, energy representations computed from the standard wavelet decomposition alone (17 features) provided classification without error for the twenty-five textures included in our study. The reliability exhibited by texture signatures based on wavelet packets analysis suggest that the multiresolution properties of such transforms are beneficial for accomplishing segmentation, classification and subtle discrimination of texture.",
"title": ""
},
{
"docid": "pos:1840144_1",
"text": "A new algorithm for medical image retrieval is presented in the paper. An 8-bit grayscale image is divided into eight binary bit-planes, and then binary wavelet transform (BWT) which is similar to the lifting scheme in real wavelet transform (RWT) is performed on each bitplane to extract the multi-resolution binary images. The local binary pattern (LBP) features are extracted from the resultant BWT sub-bands. Three experiments have been carried out for proving the effectiveness of the proposed algorithm. Out of which two are meant for medical image retrieval and one for face retrieval. It is further mentioned that the databases considered for the three experiments are the OASIS magnetic resonance imaging (MRI) database, NEMA computer tomography (CT) database and PolyU-NIRFD face database. The results after investigation show a significant improvement in terms of their evaluation measures as compared to LBP and LBP with Gabor transform.",
"title": ""
}
] | [
{
"docid": "neg:1840144_0",
"text": "A novel dual-band microstrip antenna with omnidirectional circularly polarized (CP) and unidirectional CP characteristic for each band is proposed in this communication. Function of dual-band dual-mode is realized based on loading with metamaterial structure. Since the fields of the fundamental modes are most concentrated on the fringe of the radiating patch, modifying the geometry of the radiating patch has little effect on the radiation patterns of the two modes (n = 0, +1 mode). CP property for the omnidirectional zeroth-order resonance (n = 0 mode) is achieved by employing curved branches in the radiating patch. Then a 45° inclined rectangular slot is etched in the center of the radiating patch to excite the CP property for the n = +1 mode. A prototype is fabricated to verify the properties of the antenna. Both simulation and measurement results illustrate that this single-feed antenna is valuable in wireless communication for its low-profile, radiation pattern selectivity and CP characteristic.",
"title": ""
},
{
"docid": "neg:1840144_1",
"text": "Chapters cover topics in areas such as P and NP, space complexity, randomness, computational problems that are (or appear) infeasible to solve, pseudo-random generators, and probabilistic proof systems. The introduction nicely summarizes the material covered in the rest of the book and includes a diagram of dependencies between chapter topics. Initial chapters cover preliminary topics as preparation for the rest of the book. These are more than topical or historical summaries but generally not sufficient to fully prepare the reader for later material. Readers should approach this text already competent at undergraduate-level algorithms in areas such as basic analysis, algorithm strategies, fundamental algorithm techniques, and the basics for determining computability. Elective work in P versus NP or advanced analysis would be valuable but that isn't really required.",
"title": ""
},
{
"docid": "neg:1840144_2",
"text": "Vol. 44, No. 6, 2015 We developed a classroom observation protocol for quantitatively measuring student engagement in large university classes. The Behavioral Engagement Related to Instruction (BERI) protocol can be used to provide timely feedback to instructors as to how they can improve student engagement in their classrooms. We tested BERI on seven courses with different instructors and pedagogy. BERI achieved excellent interrater agreement (>95%) with a one-hour training session with new observers. It also showed consistent patterns of variation in engagement with instructor actions and classroom activity. Most notably, it showed that there was substantially higher engagement among the same group of students when interactive teaching methods were used compared with more traditional didactic methods. The same general variations in student engagement with instructional methods were present in all parts of the room and for different instructors. A New Tool for Measuring Student Behavioral Engagement in Large University Classes",
"title": ""
},
{
"docid": "neg:1840144_3",
"text": "Deep learning has given way to a new era of machine learning, apart from computer vision. Convolutional neural networks have been implemented in image classification, segmentation and object detection. Despite recent advancements, we are still in the very early stages and have yet to settle on best practices for network architecture in terms of deep design, small in size and a short training time. In this work, we propose a very deep neural network comprised of 16 Convolutional layers compressed with the Fire Module adapted from the SQUEEZENET model. We also call for the addition of residual connections to help suppress degradation. This model can be implemented on almost every neural network model with fully incorporated residual learning. This proposed model Residual-Squeeze-VGG16 (ResSquVGG16) trained on the large-scale MIT Places365-Standard scene dataset. In our tests, the model performed with accuracy similar to the pre-trained VGG16 model in Top-1 and Top-5 validation accuracy while also enjoying a 23.86% reduction in training time and an 88.4% reduction in size. In our tests, this model was trained from scratch. Keywords— Convolutional Neural Networks; VGG16; Residual learning; Squeeze Neural Networks; Residual-Squeeze-VGG16; Scene Classification; ResSquVGG16.",
"title": ""
},
{
"docid": "neg:1840144_4",
"text": "This paper provides a detailed analysis of a SOI CMOS tunable capacitor for antenna tuning. Design expressions for a switched capacitor network are given and quality factor of the whole network is expressed as a function of design parameters. Application to antenna aperture tuning is described by combining a 130 nm SOI CMOS tunable capacitor with a printed notch antenna. The proposed tunable multiband antenna can be tuned from 420 MHz to 790 MHz, with an associated radiation efficiency in the 33-73% range.",
"title": ""
},
{
"docid": "neg:1840144_5",
"text": "Recently, deep learning and deep neural networks have attracted considerable attention and emerged as one predominant field of research in the artificial intelligence community. The developed techniques have also gained widespread use in various domains with good success, such as automatic speech recognition, information retrieval and text classification, etc. Among them, long short-term memory (LSTM) networks are well suited to such tasks, which can capture long-range dependencies among words efficiently, meanwhile alleviating the gradient vanishing or exploding problem during training effectively. Following this line of research, in this paper we explore a novel use of a Siamese LSTM based method to learn more accurate document representation for text categorization. Such a network architecture takes a pair of documents with variable lengths as the input and utilizes pairwise learning to generate distributed representations of documents that can more precisely render the semantic distance between any pair of documents. In doing so, documents associated with the same semantic or topic label could be mapped to similar representations having a relatively higher semantic similarity. Experiments conducted on two benchmark text categorization tasks, viz. IMDB and 20Newsgroups, show that using a three-layer deep neural network based classifier that takes a document representation learned from the Siamese LSTM sub-networks as the input can achieve competitive performance in relation to several state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840144_6",
"text": "With the surge of mobile internet traffic, Cloud RAN (C-RAN) becomes an innovative architecture to help mobile operators maintain profitability and financial growth as well as to provide better services to the customers. It consists of Base Band Units (BBU) of several base stations, which are co-located in a secured place called Central Office and connected to Radio Remote Heads (RRH) via high bandwidth, low latency links. With BBU centralization in C-RAN, handover, the most important feature for mobile communications, could achieve simplified procedure or improved performance. In this paper, we analyze the handover performance of C-RAN over a baseline decentralized RAN (D-RAN) for GSM, UMTS and LTE systems. The results indicate that, lower total average handover interrupt time could be achieved in GSM thanks to the synchronous nature of handovers in C-RAN. For UMTS, inter-NodeB soft handover in D-RAN would become intra-pool softer handover in C-RAN. This brings some gains in terms of reduced signalling, less Iub transport bearer setup and reduced transport bandwidth requirement. For LTE X2-based inter-eNB handover, C-RAN could reduce the handover delay and to a large extent eliminate the risk of UE losing its connection with the serving cell while still waiting for the handover command, which in turn decrease the handover failure rate.",
"title": ""
},
{
"docid": "neg:1840144_7",
"text": "This study was designed to explore the impact of Yoga and Meditation based lifestyle intervention (YMLI) on cellular aging in apparently healthy individuals. During this 12-week prospective, open-label, single arm exploratory study, 96 apparently healthy individuals were enrolled to receive YMLI. The primary endpoints were assessment of the change in levels of cardinal biomarkers of cellular aging in blood from baseline to week 12, which included DNA damage marker 8-hydroxy-2'-deoxyguanosine (8-OH2dG), oxidative stress markers reactive oxygen species (ROS), and total antioxidant capacity (TAC), and telomere attrition markers telomere length and telomerase activity. The secondary endpoints were assessment of metabotrophic blood biomarkers associated with cellular aging, which included cortisol, β-endorphin, IL-6, BDNF, and sirtuin-1. After 12 weeks of YMLI, there were significant improvements in both the cardinal biomarkers of cellular aging and the metabotrophic biomarkers influencing cellular aging compared to baseline values. The mean levels of 8-OH2dG, ROS, cortisol, and IL-6 were significantly lower and mean levels of TAC, telomerase activity, β-endorphin, BDNF, and sirtuin-1 were significantly increased (all values p < 0.05) post-YMLI. The mean level of telomere length was increased but the finding was not significant (p = 0.069). YMLI significantly reduced the rate of cellular aging in apparently healthy population.",
"title": ""
},
{
"docid": "neg:1840144_8",
"text": "The problem of matching measured latitude/longitude points to roads is becoming increasingly important. This paper describes a novel, principled map matching algorithm that uses a Hidden Markov Model (HMM) to find the most likely road route represented by a time-stamped sequence of latitude/longitude pairs. The HMM elegantly accounts for measurement noise and the layout of the road network. We test our algorithm on ground truth data collected from a GPS receiver in a vehicle. Our test shows how the algorithm breaks down as the sampling rate of the GPS is reduced. We also test the effect of increasing amounts of additional measurement noise in order to assess how well our algorithm could deal with the inaccuracies of other location measurement systems, such as those based on WiFi and cell tower multilateration. We provide our GPS data and road network representation as a standard test set for other researchers to use in their map matching work.",
"title": ""
},
{
"docid": "neg:1840144_9",
"text": "Breast cancer is one of the most common cancer in women worldwide. It is typically diagnosed via histopathological microscopy imaging, for which image analysis can aid physicians for more effective diagnosis. Given a large variability in tissue appearance, to better capture discriminative traits, images can be acquired at different optical magnifications. In this paper, we propose an approach which utilizes joint colour-texture features and a classifier ensemble for classifying breast histopathology images. While we demonstrate the effectiveness of the proposed framework, an important objective of this work is to study the image classification across different optical magnification levels. We provide interesting experimental results and related discussions, demonstrating a visible classification invariance with cross-magnification training-testing. Along with magnification-specific model, we also evaluate the magnification independent model, and compare the two to gain some insights.",
"title": ""
},
{
"docid": "neg:1840144_10",
"text": "This paper presents an overview of the state of the art in reactive power compensation technologies. The principles of operation, design characteristics and application examples of Var compensators implemented with thyristors and self-commutated converters are presented. Static Var generators are used to improve voltage regulation, stability, and power factor in ac transmission and distribution systems. Examples obtained from relevant applications describing the use of reactive power compensators implemented with new static Var technologies are also described.",
"title": ""
},
{
"docid": "neg:1840144_11",
"text": "In this paper, the attitude stabilization problem of an Octorotor with coaxial motors is studied. To this end, the new method of intelligent adaptive control is presented. The designed controller which includes fuzzy and PID controllers, is completed by resistant adaptive function of approximate external disturbance and changing in the dynamic model. In fact, the regulation factor of PID controller is done by the fuzzy logic system. At first, the Fuzzy-PID and PID controllers are simulated in MATLAB/Simulink. Then, the Fuzzy-PID controller is implemented on the Octorotor with coaxial motors as online auto-tuning. Also, LabVIEW software has been used for tests and the performance analysis of the controllers. All of this experimental operation is done in indoor environment in the presence of wind as disturbance in the hovering operation. All of these operations are real-time and telemetry wireless is done by network connection between the robot and ground station in the LABVIEW software. Finally, the controller efficiency and results are studied.",
"title": ""
},
{
"docid": "neg:1840144_12",
"text": "Presentation-specifically, its use of elements from storytelling-is the next logical step in visualization research and should be a focus of at least equal importance with exploration and analysis.",
"title": ""
},
{
"docid": "neg:1840144_13",
"text": "In this paper we present MATISSE 2.0, a microscopic multi-agent based simulation system for the specification and execution of simulation scenarios for Agent-based intelligent Transportation Systems (ATS). In MATISSE, each smart traffic element (e.g., vehicle, intersection control device) is modeled as a virtual agent which continuously senses its surroundings and communicates and collaborates with other agents. MATISSE incorporates traffic control strategies such as contraflow operations and dynamic traffic sign changes. Experimental results show the ability of MATISSE 2.0 to simulate traffic scenarios with thousands of agents on a single PC.",
"title": ""
},
{
"docid": "neg:1840144_14",
"text": "This paper introduces a learning scheme to construct a Hilbert space (i.e., a vector space along its inner product) to address both unsupervised and semi-supervised domain adaptation problems. This is achieved by learning projections from each domain to a latent space along the Mahalanobis metric of the latent space to simultaneously minimizing a notion of domain variance while maximizing a measure of discriminatory power. In particular, we make use of the Riemannian optimization techniques to match statistical properties (e.g., first and second order statistics) between samples projected into the latent space from different domains. Upon availability of class labels, we further deem samples sharing the same label to form more compact clusters while pulling away samples coming from different classes. We extensively evaluate and contrast our proposal against state-of-the-art methods for the task of visual domain adaptation using both handcrafted and deep-net features. Our experiments show that even with a simple nearest neighbor classifier, the proposed method can outperform several state-of-the-art methods benefitting from more involved classification schemes.",
"title": ""
},
{
"docid": "neg:1840144_15",
"text": "Plum pox virus (PPV) causes the most economically-devastating viral disease in Prunus species. Unfortunately, few natural resistance genes are available for the control of PPV. Recessive resistance to some potyviruses is associated with mutations of eukaryotic translation initiation factor 4E (eIF4E) or its isoform eIF(iso)4E. In this study, we used an RNA silencing approach to manipulate the expression of eIF4E and eIF(iso)4E towards the development of PPV resistance in Prunus species. The eIF4E and eIF(iso)4E genes were cloned from plum (Prunus domestica L.). The sequence identity between plum eIF4E and eIF(iso)4E coding sequences is 60.4% at the nucleotide level and 52.1% at the amino acid level. Quantitative real-time RT-PCR analysis showed that these two genes have a similar expression pattern in different tissues. Transgenes allowing the production of hairpin RNAs of plum eIF4E or eIF(iso)4E were introduced into plum via Agrobacterium-mediated transformation. Gene expression analysis confirmed specific reduced expression of eIF4E or eIF(iso)4E in the transgenic lines and this was associated with the accumulation of siRNAs. Transgenic plants were challenged with PPV-D strain and resistance was evaluated by measuring the concentration of viral RNA. Eighty-two percent of the eIF(iso)4E silenced transgenic plants were resistant to PPV, while eIF4E silenced transgenic plants did not show PPV resistance. Physical interaction between PPV-VPg and plum eIF(iso)4E was confirmed. In contrast, no PPV-VPg/eIF4E interaction was observed. These results indicate that eIF(iso)4E is involved in PPV infection in plum, and that silencing of eIF(iso)4E expression can lead to PPV resistance in Prunus species.",
"title": ""
},
{
"docid": "neg:1840144_16",
"text": "We started investigating the collection of HTML tables on the Web and developed the WebTables system a few years ago [4]. Since then, our work has been motivated by applying WebTables in a broad set of applications at Google, resulting in several product launches. In this paper, we describe the challenges faced, lessons learned, and new insights that we gained from our efforts. The main challenges we faced in our efforts were (1) identifying tables that are likely to contain high-quality data (as opposed to tables used for navigation, layout, or formatting), and (2) recovering the semantics of these tables or signals that hint at their semantics. The result is a semantically enriched table corpus that we used to develop several services. First, we created a search engine for structured data whose index includes over a hundred million HTML tables. Second, we enabled users of Google Docs (through its Research Panel) to find relevant data tables and to insert such data into their documents as needed. Most recently, we brought WebTables to a much broader audience by using the table corpus to provide richer tabular snippets for fact-seeking web search queries on Google.com.",
"title": ""
},
{
"docid": "neg:1840144_17",
"text": "This paper presents the design of a new haptic feedback device for transradial myoelectric upper limb prosthesis that allows the amputee person to perceive the sensation of force-gripping and object-sliding. The system designed has three mechanical-actuator units to convey the sensation of force, and one vibrotactile unit to transmit the sensation of object sliding. The device designed will be placed on the user's amputee forearm. In order to validate the design of the structure, a stress analysis through Finite Element Method (FEM) is conducted.",
"title": ""
},
{
"docid": "neg:1840144_18",
"text": "BACKGROUND\nFatigue is one of the common complaints of multiple sclerosis (MS) patients, and its treatment is relatively unclear. Ginseng is one of the herbal medicines possessing antifatigue properties, and its administration in MS for such a purpose has been scarcely evaluated. The purpose of this study was to evaluate the efficacy and safety of ginseng in the treatment of fatigue and the quality of life of MS patients.\n\n\nMETHODS\nEligible female MS patients were randomized in a double-blind manner, to receive 250-mg ginseng or placebo twice daily over 3 months. Outcome measures included the Modified Fatigue Impact Scale (MFIS) and the Iranian version of the Multiple Sclerosis Quality Of Life Questionnaire (MSQOL-54). The questionnaires were used after randomization, and again at the end of the study.\n\n\nRESULTS\nOf 60 patients who were enrolled in the study, 52 (86%) subjects completed the trial with good drug tolerance. Statistical analysis showed better effects for ginseng than the placebo as regards MFIS (p = 0.046) and MSQOL (p ≤ 0.0001) after 3 months. No serious adverse events were observed during follow-up.\n\n\nCONCLUSIONS\nThis study indicates that 3-month ginseng treatment can reduce fatigue and has a significant positive effect on quality of life. Ginseng is probably a good candidate for the relief of MS-related fatigue. Further studies are needed to shed light on the efficacy of ginseng in this field.",
"title": ""
}
] |
1840145 | Large Scale Log Analytics through ELK | [
{
"docid": "pos:1840145_0",
"text": "Big Data Analytics and Deep Learning are two high-focus of data science. Big Data has become important as many organizations both public and private have been collecting massive amounts of domain-specific information, which can contain useful information about problems such as national intelligence, cyber security, fraud detection, marketing, and medical informatics. Companies such as Google and Microsoft are analyzing large volumes of data for business analysis and decisions, impacting existing and future technology. Deep Learning algorithms extract high-level, complex abstractions as data representations through a hierarchical learning process. Complex abstractions are learnt at a given level based on relatively simpler abstractions formulated in the preceding level in the hierarchy. A key benefit of Deep Learning is the analysis and learning of massive amounts of unsupervised data, making it a valuable tool for Big Data Analytics where raw data is largely unlabeled and un-categorized. In the present study, we explore how Deep Learning can be utilized for addressing some important problems in Big Data Analytics, including extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks. We also investigate some aspects of Deep Learning research that need further exploration to incorporate specific challenges introduced by Big Data Analytics, including streaming data, high-dimensional data, scalability of models, and distributed computing. We conclude by presenting insights into relevant future works by posing some questions, including defining data sampling criteria, domain adaptation modeling, defining criteria for obtaining useful data abstractions, improving semantic indexing, semi-supervised learning, and active learning.",
"title": ""
}
] | [
{
"docid": "neg:1840145_0",
"text": "Over the last two decades, an impressive progress has been made in the identification of novel factors in the translocation machineries of the mitochondrial protein import and their possible roles. The role of lipids and possible protein-lipids interactions remains a relatively unexplored territory. Investigating the role of potential lipid-binding regions in the sub-units of the mitochondrial motor might help to shed some more light in our understanding of protein-lipid interactions mechanistically. Bioinformatics results seem to indicate multiple potential lipid-binding regions in each of the sub-units. The subsequent characterization of some of those regions in silico provides insight into the mechanistic functioning of this intriguing and essential part of the protein translocation machinery. Details about the way the regions interact with phospholipids were found by the use of Monte Carlo simulations. For example, Pam18 contains one possible transmembrane region and two tilted surface bound conformations upon interaction with phospholipids. The results demonstrate that the presented bioinformatics approach might be useful in an attempt to expand the knowledge of the possible role of protein-lipid interactions in the mitochondrial protein translocation process.",
"title": ""
},
{
"docid": "neg:1840145_1",
"text": "We present techniques for gathering data that expose errors of automatic predictive models. In certain common settings, traditional methods for evaluating predictive models tend to miss rare but important errors—most importantly, cases for which the model is confident of its prediction (but wrong). In this article, we present a system that, in a game-like setting, asks humans to identify cases that will cause the predictive model-based system to fail. Such techniques are valuable in discovering problematic cases that may not reveal themselves during the normal operation of the system and may include cases that are rare but catastrophic. We describe the design of the system, including design iterations that did not quite work. In particular, the system incentivizes humans to provide examples that are difficult for the model to handle by providing a reward proportional to the magnitude of the predictive model's error. The humans are asked to “Beat the Machine” and find cases where the automatic model (“the Machine”) is wrong. Experiments show that the humans using Beat the Machine identify more errors than do traditional techniques for discovering errors in predictive models, and, indeed, they identify many more errors where the machine is (wrongly) confident it is correct. Furthermore, those cases the humans identify seem to be not simply outliers, but coherent areas missed completely by the model. Beat the Machine identifies the “unknown unknowns.” Beat the Machine has been deployed at an industrial scale by several companies. The main impact has been that firms are changing their perspective on and practice of evaluating predictive models.\n “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are also unknown unknowns. There are things we don't know we don't know.”\n --Donald Rumsfeld",
"title": ""
},
{
"docid": "neg:1840145_2",
"text": "Basic definitions concerning the multi-layer feed-forward neural networks are given. The back-propagation training algorithm is explained. Partial derivatives of the objective function with respect to the weight and threshold coefficients are derived. These derivatives are valuable for an adaptation process of the considered neural network. Training and generalisation of multi-layer feed-forward neural networks are discussed. Improvements of the standard back-propagation algorithm are reviewed. Example of the use of multi-layer feed-forward neural networks for prediction of carbon-13 NMR chemical shifts of alkanes is given. Further applications of neural networks in chemistry are reviewed. Advantages and disadvantages of multilayer feed-forward neural networks are discussed.",
"title": ""
},
{
"docid": "neg:1840145_3",
"text": "In this talk we introduce visible light communication and discuss challenges and techniques to improve the performance of white organic light emitting diode (OLED) based systems.",
"title": ""
},
{
"docid": "neg:1840145_4",
"text": "BACKGROUND\nAndrogenetic alopecia (AGA) is a common form of scalp hair loss that affects up to 50% of males between 18 and 40 years old. Several molecules are commonly used for the treatment of AGA, acting on different steps of its pathogenesis (Minoxidil, Finasteride, Serenoa repens) and show some side effects. In literature, on the basis of hypertrichosis observed in patients treated with analogues of prostaglandin PGF2a, it was supposed that prostaglandins would have an important role in the hair growth: PGE and PGF2a play a positive role, while PGD2 a negative one.\n\n\nOBJECTIVE\nWe carried out a pilot study to evaluate the efficacy of topical cetirizine versus placebo in patients with AGA.\n\n\nPATIENTS AND METHODS\nA sample of 85 patients was recruited, of which 67 were used to assess the effectiveness of the treatment with topical cetirizine, while 18 were control patients.\n\n\nRESULTS\nWe found that the main effect of cetirizine was an increase in total hair density, terminal hair density and diameter variation from T0 to T1, while the vellus hair density shows an evident decrease. The use of a molecule as cetirizine, with no notable side effects, makes possible a good compliance by patients.\n\n\nCONCLUSION\nOur results have shown that topical cetirizine 1% is responsible for a significant improvement of the initial framework of AGA.",
"title": ""
},
{
"docid": "neg:1840145_5",
"text": "This paper proposes a novel diagnosis method for detection and discrimination of two typical mechanical failures in induction motors by stator current analysis: load torque oscillations and dynamic rotor eccentricity. A theoretical analysis shows that each fault modulates the stator current in a different way: torque oscillations lead to stator current phase modulation, whereas rotor eccentricities produce stator current amplitude modulation. The use of traditional current spectrum analysis involves identical frequency signatures with the two fault types. A time-frequency analysis of the stator current with the Wigner distribution leads to different fault signatures that can be used for a more accurate diagnosis. The theoretical considerations and the proposed diagnosis techniques are validated on experimental signals.",
"title": ""
},
{
"docid": "neg:1840145_6",
"text": "This paper presents APT, a localization system for outdoor pedestrians with smartphones. APT performs better than the built-in GPS module of the smartphone in terms of accuracy. This is achieved by introducing a robust dead reckoning algorithm and an error-tolerant algorithm for map matching. When the user is walking with the smartphone, the dead reckoning algorithm monitors steps and walking direction in real time. It then reports new steps and turns to the map-matching algorithm. Based on updated information, this algorithm adjusts the user's location on a map in an error-tolerant manner. If location ambiguity among several routes occurs after adjustments, the GPS module is queried to help eliminate this ambiguity. Evaluations in practice show that the error of our system is less than 1/2 that of GPS.",
"title": ""
},
{
"docid": "neg:1840145_7",
"text": "Nowadays people are more interested to express and share their views, feedbacks, suggestions, and opinions about a particular topic on the web. People and company rely more on online opinions about products and services for their decision making. A major problem in identifying the opinion classification is high dimensionality of the feature space. Most of these features are irrelevant, redundant, and noisy which affects the performance of the classifier. Therefore, feature selection is an essential step in the fake review detection to reduce the dimensionality of the feature space and to improve accuracy. In this paper, binary artificial bee colony (BABC) with KNN is proposed to solve feature selection problem for sentiment classification. The experimental results demonstrate that the proposed method selects more informative features set compared to the competitive methods as it attains higher classification accuracy.",
"title": ""
},
{
"docid": "neg:1840145_8",
"text": "Emotion hacking virtual reality (EH-VR) system is an interactive system that hacks one's heartbeat and controls it to accelerate scary VR experience. The EH-VR system provides vibrotactile biofeedback, which resembles a heartbeat, from the floor. The system determines false heartbeat frequency by detecting user's heart rate in real time and calculates false heart rate, which is faster than the one observed according to the quadric equation model. With the system, we demonstrate \"Pressure of unknown\" which is a CG VR space originally created to express the metaphor of scare. A user experiences this space by using a wheel chair as a controller to walk through a VR world displayed via HMD while receiving vibrotactile feedback of false heartbeat calculated from its own heart rate from the floor.",
"title": ""
},
{
"docid": "neg:1840145_9",
"text": "Renewable energy sources are essential paths towards sustainable development and CO2 emission reduction. For example, the European Union has set the target of achieving 22% of electricity generation from renewable sources by 2010. However, the extensive use of this energy source is being avoided by some technical problems as fouling and slagging in the surfaces of boiler heat exchangers. Although these phenomena were extensively studied in the last decades in order to optimize the behaviour of large coal power boilers, a simple, general and effective method for fouling control has not been developed. For biomass boilers, the feedstock variability and the presence of new components in ash chemistry increase the fouling influence in boiler performance. In particular, heat transfer is widely affected and the boiler capacity becomes dramatically reduced. Unfortunately, the classical approach of regular sootblowing cycles becomes clearly insufficient for them. Artificial Intelligence (AI) provides new means to undertake this problem. This paper illustrates a methodology based on Neural Networks (NNs) and Fuzzy-Logic Expert Systems to select the moment for activating sootblowing in an industrial biomass boiler. The main aim is to minimize the boiler energy and efficiency losses with a proper sootblowing activation. Although the NN type used in this work is well-known and the Hybrid Systems had been extensively used in the last decade, the excellent results obtained in the use of AI in industrial biomass boilers control with regard to previous approaches makes this work a novelty.",
"title": ""
},
{
"docid": "neg:1840145_10",
"text": "Thalidomide was originally used to treat morning sickness, but was banned in the 1960s for causing serious congenital birth defects. Remarkably, thalidomide was subsequently discovered to have anti-inflammatory and anti-angiogenic properties, and was identified as an effective treatment for multiple myeloma. A series of immunomodulatory drugs — created by chemical modification of thalidomide — have been developed to overcome the original devastating side effects. Their powerful anticancer properties mean that these drugs are now emerging from thalidomide's shadow as useful anticancer agents.",
"title": ""
},
{
"docid": "neg:1840145_11",
"text": "A key ingredient in the design of visual object classification systems is the identification of relevant class specific aspects while being robust to intra-class variations. While this is a necessity in order to generalize beyond a given set of training images, it is also a very difficult problem due to the high variability of visual appearance within each class. In the last years substantial performance gains on challenging benchmark datasets have been reported in the literature. This progress can be attributed to two developments: the design of highly discriminative and robust image features and the combination of multiple complementary features based on different aspects such as shape, color or texture. In this paper we study several models that aim at learning the correct weighting of different features from training data. These include multiple kernel learning as well as simple baseline methods. Furthermore we derive ensemble methods inspired by Boosting which are easily extendable to several multiclass setting. All methods are thoroughly evaluated on object classification datasets using a multitude of feature descriptors. The key results are that even very simple baseline methods, that are orders of magnitude faster than learning techniques are highly competitive with multiple kernel learning. Furthermore the Boosting type methods are found to produce consistently better results in all experiments. We provide insight of when combination methods can be expected to work and how the benefit of complementary features can be exploited most efficiently.",
"title": ""
},
{
"docid": "neg:1840145_12",
"text": "Sketching on paper is a quick and easy way to communicate ideas. However, many sketch-based systems require people to draw in contrived ways instead of sketching freely as they would on paper. NaturaSketch affords a more natural interface through multiple strokes that overlap, cross, and connect. It also features a meshing algorithm to support multiple strokes of different classifications, which lets users design complex 3D shapes from sketches drawn over existing images. To provide a familiar workflow for object design, a set of sketch annotations can also specify modeling and editing operations. NaturaSketch empowers designers to produce a variety of models quickly and easily.",
"title": ""
},
{
"docid": "neg:1840145_13",
"text": "Automatic extraction of synonyms and/or semantically related words has various applications in Natural Language Processing (NLP). There are currently two mainstream extraction paradigms, namely, lexicon-based and distributional approaches. The former usually suffers from low coverage, while the latter is only able to capture general relatedness rather than strict synonymy. In this paper, two rule-based extraction methods are applied to definitions from a machine-readable dictionary. Extracted synonyms are evaluated in two experiments by solving TOEFL synonym questions and being compared against existing thesauri. The proposed approaches have achieved satisfactory results in both evaluations, comparable to published studies or even the state of the art.",
"title": ""
},
{
"docid": "neg:1840145_14",
"text": "We present here a QT database designed for evaluation of algorithms that detect waveform boundaries in the ECG. The database consists of 105 fifteen-minute excerpts of two-channel ECG Holter recordings, chosen to include a broad variety of QRS and ST-T morphologies. Waveform boundaries for a subset of beats in these recordings have been manually determined by expert annotators using an interactive graphic display to view both signals simultaneously and to insert the annotations. Examples of each morphology were included in this subset of annotated beats; at least 30 beats in each record, 3622 beats in all, were manually annotated in the database. In 11 records, two independent sets of annotations have been included, to allow inter-observer variability studies. The QT Database is available on a CD-ROM in the format previously used for the MIT-BIH Arrhythmia Database and the European ST-T Database, from which some of the recordings in the QT Database have been obtained.",
"title": ""
},
{
"docid": "neg:1840145_15",
"text": "Cardiac complications are common after non-cardiac surgery. Peri-operative myocardial infarction occurs in 3% of patients undergoing major surgery. Recently, however, our understanding of the epidemiology of these cardiac events has broadened to include myocardial injury after non-cardiac surgery, diagnosed by an asymptomatic troponin rise, which also carries a poor prognosis. We review the causation of myocardial injury after non-cardiac surgery, with potential for prevention and treatment, based on currently available international guidelines and landmark studies. Postoperative arrhythmias are also a frequent cause of morbidity, with atrial fibrillation and QT-prolongation having specific relevance to the peri-operative period. Postoperative systolic heart failure is rare outside of myocardial infarction or cardiac surgery, but the impact of pre-operative diastolic dysfunction and its ability to cause postoperative heart failure is increasingly recognised. The latest evidence regarding diastolic dysfunction and the impact on non-cardiac surgery are examined to help guide fluid management for the non-cardiac anaesthetist.",
"title": ""
},
{
"docid": "neg:1840145_16",
"text": "Aiming at automatic, convenient and non-instrusive motion capture, this paper presents a new generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles(UAVs) each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the using of visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gaussian-Newton method. Quantitative and qualitative experimental results demonstrate the plausible surface and motion reconstruction results.",
"title": ""
},
{
"docid": "neg:1840145_17",
"text": "Tracing traffic using commodity hardware in contemporary highspeed access or aggregation networks such as 10-Gigabit Ethernet is an increasingly common yet challenging task. In this paper we investigate if today’s commodity hardware and software is in principle able to capture traffic from a fully loaded Ethernet. We find that this is only possible for data rates up to 1 Gigabit/s without reverting to using special hardware due to, e. g., limitations with the current PC buses. Therefore, we propose a novel way for monitoring higher speed interfaces (e. g., 10-Gigabit) by distributing their traffic across a set of lower speed interfaces (e. g., 1-Gigabit). This opens the next question: which system configuration is capable of monitoring one such 1-Gigabit/s interface? To answer this question we present a methodology for evaluating the performance impact of different system components including different CPU architectures and different operating system. Our results indicate that the combination of AMD Opteron with FreeBSD outperforms all others, independently of running in singleor multi-processor mode. Moreover, the impact of packet filtering, running multiple capturing applications, adding per packet analysis load, saving the captured packets to disk, and using 64-bit OSes is investigated.",
"title": ""
},
{
"docid": "neg:1840145_18",
"text": "In this paper, we introduce a method that automatically builds text classifiers in a new language by training on already labeled data in another language. Our method transfers the classification knowledge across languages by translating the model features and by using an Expectation Maximization (EM) algorithm that naturally takes into account the ambiguity associated with the translation of a word. We further exploit the readily available unlabeled data in the target language via semisupervised learning, and adapt the translated model to better fit the data distribution of the target language.",
"title": ""
}
] |
1840146 | 5G Millimeter-Wave Antenna Array: Design and Challenges | [
{
"docid": "pos:1840146_0",
"text": "The fourth generation wireless communication systems have been deployed or are soon to be deployed in many countries. However, with an explosion of wireless mobile devices and services, there are still some challenges that cannot be accommodated even by 4G, such as the spectrum crisis and high energy consumption. Wireless system designers have been facing the continuously increasing demand for high data rates and mobility required by new wireless applications and therefore have started research on fifth generation wireless systems that are expected to be deployed beyond 2020. In this article, we propose a potential cellular architecture that separates indoor and outdoor scenarios, and discuss various promising technologies for 5G wireless communication systems, such as massive MIMO, energy-efficient communications, cognitive radio networks, and visible light communications. Future challenges facing these potential technologies are also discussed.",
"title": ""
},
{
"docid": "pos:1840146_1",
"text": "The ever growing traffic explosion in mobile communications has recently drawn increased attention to the large amount of underutilized spectrum in the millimeter-wave frequency bands as a potentially viable solution for achieving tens to hundreds of times more capacity compared to current 4G cellular networks. Historically, mmWave bands were ruled out for cellular usage mainly due to concerns regarding short-range and non-line-of-sight coverage issues. In this article, we present recent results from channel measurement campaigns and the development of advanced algorithms and a prototype, which clearly demonstrate that the mmWave band may indeed be a worthy candidate for next generation (5G) cellular systems. The results of channel measurements carried out in both the United States and Korea are summarized along with the actual free space propagation measurements in an anechoic chamber. Then a novel hybrid beamforming scheme and its link- and system-level simulation results are presented. Finally, recent results from our mmWave prototyping efforts along with indoor and outdoor test results are described to assert the feasibility of mmWave bands for cellular usage.",
"title": ""
}
] | [
{
"docid": "neg:1840146_0",
"text": "People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is experiencing a growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are lack of a proper dataset, methodology, baselines and statistical analysis of how information from different modality sources relate to each other. This paper introduces to the scientific community the first opinion-level annotated corpus of sentiment and subjectivity analysis in online videos called Multimodal Opinionlevel Sentiment Intensity dataset (MOSI). The dataset is rigorously annotated with labels for subjectivity, sentiment intensity, per-frame and per-opinion annotated visual features, and per-milliseconds annotated audio features. Furthermore, we present baselines for future studies in this direction as well as a new multimodal fusion approach that jointly models spoken words and visual gestures.",
"title": ""
},
{
"docid": "neg:1840146_1",
"text": "In highly regulated industries such as aerospace, the introduction of new quality standard can provide the framework for developing and formulating innovative novel business models which become the foundation to build a competitive, customer-centric enterprise. A number of enterprise modeling methods have been developed in recent years mainly to offer support for enterprise design and help specify systems requirements and solutions. However, those methods are inefficient in providing sufficient support for quality systems links and assessment. The implementation parts of the processes linked to the standards remain unclear and ambiguous for the practitioners as a result of new standards introduction. This paper proposed to integrate new revision of AS/EN9100 aerospace quality elements through systematic integration approach which can help the enterprises in business re-engineering process. The assessment capability model is also presented to identify impacts on the existing system as a result of introducing new standards.",
"title": ""
},
{
"docid": "neg:1840146_2",
"text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.",
"title": ""
},
{
"docid": "neg:1840146_3",
"text": "BACKGROUND\nFetal tachyarrhythmia may result in low cardiac output and death. Consequently, antiarrhythmic treatment is offered in most affected pregnancies. We compared 3 drugs commonly used to control supraventricular tachycardia (SVT) and atrial flutter (AF).\n\n\nMETHODS AND RESULTS\nWe reviewed 159 consecutive referrals with fetal SVT (n=114) and AF (n=45). Of these, 75 fetuses with SVT and 36 with AF were treated nonrandomly with transplacental flecainide (n=35), sotalol (n=52), or digoxin (n=24) as a first-line agent. Prenatal treatment failure was associated with an incessant versus intermittent arrhythmia pattern (n=85; hazard ratio [HR]=3.1; P<0.001) and, for SVT, with fetal hydrops (n=28; HR=1.8; P=0.04). Atrial flutter had a lower rate of conversion to sinus rhythm before delivery than SVT (HR=2.0; P=0.005). Cardioversion at 5 and 10 days occurred in 50% and 63% of treated SVT cases, respectively, but in only 25% and 41% of treated AF cases. Sotalol was associated with higher rates of prenatal AF termination than digoxin (HR=5.4; P=0.05) or flecainide (HR=7.4; P=0.03). If incessant AF/SVT persisted to day 5 (n=45), median ventricular rates declined more with flecainide (-22%) and digoxin (-13%) than with sotalol (-5%; P<0.001). Flecainide (HR=2.1; P=0.02) and digoxin (HR=2.9; P=0.01) were also associated with a higher rate of conversion of fetal SVT to a normal rhythm over time. No serious drug-related adverse events were observed, but arrhythmia-related mortality was 5%.\n\n\nCONCLUSION\nFlecainide and digoxin were superior to sotalol in converting SVT to a normal rhythm and in slowing both AF and SVT to better-tolerated ventricular rates and therefore might be considered first to treat significant fetal tachyarrhythmia.",
"title": ""
},
{
"docid": "neg:1840146_4",
"text": "This paper briefly introduces an approach to the problem of building semantic interpretations of nominal ComDounds, i.e. sequences of two or more nouns related through modification. Examples of the kinds of nominal compounds dealt with are: \"engine repairs\", \"aircraft flight arrival\", ~aluminum water pump\", and \"noun noun modification\".",
"title": ""
},
{
"docid": "neg:1840146_5",
"text": "Due to the increase of the number of wind turbines connected directly to the electric utility grid, new regulator codes have been issued that require low-voltage ride-through capability for wind turbines so that they can remain online and support the electric grid during voltage sags. Conventional ride-through techniques for the doubly fed induction generator (DFIG) architecture result in compromised control of the turbine shaft and grid current during fault events. In this paper, a series passive-impedance network at the stator side of a DFIG wind turbine is presented. It is easy to control, capable of off-line operation for high efficiency, and low cost for manufacturing and maintenance. The balanced and unbalanced fault responses of a DFIG wind turbine with a series grid side passive-impedance network are examined using computer simulations and hardware experiments.",
"title": ""
},
{
"docid": "neg:1840146_6",
"text": "Embedded quantization is a mechanism employed by many lossy image codecs to progressively refine the distortion of a (transformed) image. Currently, the most common approach to do so in the context of wavelet-based image coding is to couple uniform scalar deadzone quantization (USDQ) with bitplane coding (BPC). USDQ+BPC is convenient for its practicality and has proved to achieve competitive coding performance. But the quantizer established by this scheme does not allow major variations. This paper introduces a multistage quantization scheme named general embedded quantization (GEQ) that provides more flexibility to the quantizer. GEQ schemes can be devised for specific decoding rates achieving optimal coding performance. Practical approaches of GEQ schemes achieve coding performance similar to that of USDQ+BPC while requiring fewer quantization stages. The performance achieved by GEQ is evaluated in this paper through experimental results carried out in the framework of modern image coding systems.",
"title": ""
},
{
"docid": "neg:1840146_7",
"text": "In this paper we have introduced the notion of distance between two single valued neutrosophic sets and studied its properties. We have also defined several similarity measures between them and investigated their characteristics. A measure of entropy of a single valued neutrosophic set has also been introduced.",
"title": ""
},
{
"docid": "neg:1840146_8",
"text": "Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.",
"title": ""
},
{
"docid": "neg:1840146_9",
"text": "The number of adult learners who participate in online learning has rapidly grown in the last two decades due to online learning's many advantages. In spite of the growth, the high dropout rate in online learning has been of concern to many higher education institutions and organizations. The purpose of this study was to determine whether persistent learners and dropouts are different in individual characteristics (i.e., age, gender, and educational level), external factors (i.e., family and organizational supports), and internal factors (i.e., satisfaction and relevance as sub-dimensions of motivation). Quantitative data were collected from 147 learners who had dropped out of or finished one of the online courses offered from a large Midwestern university. Dropouts and persistent learners showed statistical differences in perceptions of family and organizational support, and satisfaction and relevance. It was also shown that the theoretical framework, which includes family support, organizational support, satisfaction, and relevance in addition to individual characteristics, is able to predict learners' decision to drop out or persist. Organizational 9upport and relevance were shown to be particularly predictive. The results imply that lower dropout rates can be achieved if online program developers or instrdctors find ways to enhance the relevance of the course. It also implies thai adult learners need to be supported by their organizations in order for them to finish online courses that they register for.",
"title": ""
},
{
"docid": "neg:1840146_10",
"text": "^^ir jEdmund Hillary of Mount Everest \\ fajne liked to tell a story about one of ^J Captain Robert Falcon Scott's earlier attempts, from 1901 to 1904, to reach the South Pole. Scott led an expedition made up of men from thb Royal Navy and the merchant marine, as jwell as a group of scientists. Scott had considel'able trouble dealing with the merchant n|arine personnel, who were unaccustomed ip the rigid discipline of Scott's Royal Navy. S|:ott wanted to send one seaman home because he would not take orders, but the seaman refused, arguing that he had signed a contract and knew his rights. Since the seaman wds not subject to Royal Navy disciplinary action, Scott did not know what to do. Then Ernest Shackleton, a merchant navy officer in $cott's party, calmly informed the seaman th^t he, the seaman, was returning to Britain. Again the seaman refused —and Shackle^on knocked him to the ship's deck. After ar^other refusal, followed by a second flooring, the seaman decided he would retuijn home. Scott later became one of the victims of his own inadequacies as a leader in his 1911 race to the South Pole. Shackleton went qn to lead many memorable expeditions; once, seeking help for the rest of his party, who were stranded on the Antarctic Coast, he journeyed with a small crew in a small open boat from the edge of Antarctica to Souilh Georgia Island.",
"title": ""
},
{
"docid": "neg:1840146_11",
"text": "Synaptic plasticity is considered to be the biological substrate of learning and memory. In this document we review phenomenological models of short-term and long-term synaptic plasticity, in particular spike-timing dependent plasticity (STDP). The aim of the document is to provide a framework for classifying and evaluating different models of plasticity. We focus on phenomenological synaptic models that are compatible with integrate-and-fire type neuron models where each neuron is described by a small number of variables. This implies that synaptic update rules for short-term or long-term plasticity can only depend on spike timing and, potentially, on membrane potential, as well as on the value of the synaptic weight, or on low-pass filtered (temporally averaged) versions of the above variables. We examine the ability of the models to account for experimental data and to fulfill expectations derived from theoretical considerations. We further discuss their relations to teacher-based rules (supervised learning) and reward-based rules (reinforcement learning). All models discussed in this paper are suitable for large-scale network simulations.",
"title": ""
},
{
"docid": "neg:1840146_12",
"text": "With the rapidly growing scales of statistical problems, subset based communicationfree parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "neg:1840146_13",
"text": "Neural network are most popular in the research community due to its generalization abilities. Additionally, it has been successfully implemented in biometrics, features selection, object tracking, document image preprocessing and classification. This paper specifically, clusters, summarize, interpret and evaluate neural networks in document Image preprocessing. The importance of the learning algorithms in neural networks training and testing for preprocessing is also highlighted. Finally, a critical analysis on the reviewed approaches and the future research guidelines in the field are suggested.",
"title": ""
},
{
"docid": "neg:1840146_14",
"text": "Selecting appropriate words to compose a sentence is one common problem faced by non-native Chinese learners. In this paper, we propose (bidirectional) LSTM sequence labeling models and explore various features to detect word usage errors in Chinese sentences. By combining CWINDOW word embedding features and POS information, the best bidirectional LSTM model achieves accuracy 0.5138 and MRR 0.6789 on the HSK dataset. For 80.79% of the test data, the model ranks the groundtruth within the top two at position level.",
"title": ""
},
{
"docid": "neg:1840146_15",
"text": "Substantial evidence suggests that the accumulation of beta-amyloid (Abeta)-derived peptides contributes to the aetiology of Alzheimer's disease (AD) by stimulating formation of free radicals. Thus, the antioxidant alpha-lipoate, which is able to cross the blood-brain barrier, would seem an ideal substance in the treatment of AD. We have investigated the potential effectiveness of alpha-lipoic acid (LA) against cytotoxicity induced by Abeta peptide (31-35) (30 microM) and hydrogen peroxide (H(2)O(2)) (100 microM) with the cellular 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) reduction and fluorescence dye propidium iodide assays in primary neurons of rat cerebral cortex. We found that treatment with LA protected cortical neurons against cytotoxicity induced by Abeta or H(2)O(2). In addition, LA-induced increase in the level of Akt in the neurons was observed by Western blot. The LA-induced neuroprotection and Akt increase were attenuated by pre-treatment with the phosphatidylinositol 3-kinase inhibitor, LY294002 (50 microM). Our data suggest that the neuroprotective effects of the antioxidant LA are partly mediated through activation of the PKB/Akt signaling pathway.",
"title": ""
},
{
"docid": "neg:1840146_16",
"text": "Despite their remarkable performance in various machine intelligence tasks, the computational intensity of Convolutional Neural Networks (CNNs) has hindered their widespread utilization in resource-constrained embedded and IoT systems. To address this problem, we present a framework for synthesis of efficient CNN inference software targeting mobile SoC platforms. We argue that thread granularity can substantially impact the performance and energy dissipation of the synthesized inference software, and demonstrate that launching the maximum number of logical threads, often promoted as a guiding principle by GPGPU practitioners, does not result in an efficient implementation for mobile SoCs. We hypothesize that the runtime of a CNN layer on a particular SoC platform can be accurately estimated as a linear function of its computational complexity, which may seem counter-intuitive, as modern mobile SoCs utilize a plethora of heterogeneous architectural features and dynamic resource management policies. Consequently, we develop a principled approach and a data-driven analytical model to optimize granularity of threads during CNN software synthesis. Experimental results with several modern CNNs mapped to a commodity Android smartphone with a Snapdragon SoC show up to 2.37X speedup in application runtime, and up to 1.9X improvement in its energy dissipation compared to existing approaches.",
"title": ""
},
{
"docid": "neg:1840146_17",
"text": "A method is presented for locating protein antigenic determinants by analyzing amino acid sequences in order to find the point of greatest local hydrophilicity. This is accomplished by assigning each amino acid a numerical value (hydrophilicity value) and then repetitively averaging these values along the peptide chain. The point of highest local average hydrophilicity is invariably located in, or immediately adjacent to, an antigenic determinant. It was found that the prediction success rate depended on averaging group length, with hexapeptide averages yielding optimal results. The method was developed using 12 proteins for which extensive immunochemical analysis has been carried out and subsequently was used to predict antigenic determinants for the following proteins: hepatitis B surface antigen, influenza hemagglutinins, fowl plague virus hemagglutinin, human histocompatibility antigen HLA-B7, human interferons, Escherichia coli and cholera enterotoxins, ragweed allergens Ra3 and Ra5, and streptococcal M protein. The hepatitis B surface antigen sequence was synthesized by chemical means and was shown to have antigenic activity by radioimmunoassay.",
"title": ""
},
{
"docid": "neg:1840146_18",
"text": "Latent structured prediction theory proposes powerful methods such as Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). In contrast, only small work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carried out a practical study comparing for the first time online learning with LSSVM. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and much lower accuracy produced by Kruskal’s spanning tree algorithm. In this respect, we also propose a new effective feature selection approach for improving system efficiency. The results show that LSP, if correctly parameterized, produces the same performance as LSSVM, being at the same time much more efficient.",
"title": ""
},
{
"docid": "neg:1840146_19",
"text": "Reading comprehension tasks test the ability of models to process long-term context and remember salient information. Recent work has shown that relatively simple neural methods such as the Attention Sum-Reader can perform well on these tasks; however, these systems still significantly trail human performance. Analysis suggests that many of the remaining hard instances are related to the inability to track entity-references throughout documents. This work focuses on these hard entity tracking cases with two extensions: (1) additional entity features, and (2) training with a multi-task tracking objective. We show that these simple modifications improve performance both independently and in combination, and we outperform the previous state of the art on the LAMBADA dataset, particularly on difficult entity examples.",
"title": ""
}
] |
1840147 | The relational self: an interpersonal social-cognitive theory. | [
{
"docid": "pos:1840147_0",
"text": "Negative (adverse or threatening) events evoke strong and rapid physiological, cognitive, emotional, and social responses. This mobilization of the organism is followed by physiological, cognitive, and behavioral responses that damp down, minimize, and even erase the impact of that event. This pattern of mobilization-minimization appears to be greater for negative events than for neutral or positive events. Theoretical accounts of this response pattern are reviewed. It is concluded that no single theoretical mechanism can explain the mobilization-minimization pattern, but that a family of integrated process models, encompassing different classes of responses, may account for this pattern of parallel but disparately caused effects.",
"title": ""
}
] | [
{
"docid": "neg:1840147_0",
"text": "over concrete thinking Understand that virtual objects are computer generated, and they do not need to obey physical laws",
"title": ""
},
{
"docid": "neg:1840147_1",
"text": "The present study aims to design, develop, operate and evaluate a social media GIS (Geographical Information Systems) specially tailored to mash-up the information that local residents and governments provide to support information utilization from normal times to disaster outbreak times in order to promote disaster reduction. The conclusions of the present study are summarized in the following three points. (1) Social media GIS, an information system which integrates a Web-GIS, an SNS and Twitter in addition to an information classification function, a button function and a ranking function into a single system, was developed. This made it propose an information utilization system based on the assumption of disaster outbreak times when information overload happens as well as normal times. (2) The social media GIS was operated for fifty local residents who are more than 18 years old for ten weeks in Mitaka City of Tokyo metropolis. Although about 32% of the users were in their forties, about 30% were aged fifties, and more than 10% of the users were in their twenties, thirties and sixties or more. (3) The access survey showed that 260 pieces of disaster information were distributed throughout the whole city of Mitaka. Among the disaster information, danger-related information occupied 20%, safety-related information occupied 68%, and other information occupied 12%. Keywords—Social Media GIS; Web-GIS; SNS; Twitter; Disaster Information; Disaster Reduction; Support for Information Utilization",
"title": ""
},
{
"docid": "neg:1840147_2",
"text": "We report on the design objectives and initial design of a new discrete-event network simulator for the research community. Creating Yet Another Network Simulator (yans, http://yans.inria.fr/yans) is not the sort of prospect network researchers are happy to contemplate, but this effort may be timely given that ns-2 is considering a major revision and is evaluating new simulator cores. We describe why we did not choose to build on existing tools such as ns-2, GTNetS, and OPNET, outline our functional requirements, provide a high-level view of the architecture and core components, and describe a new IEEE 802.11 model provided with yans.",
"title": ""
},
{
"docid": "neg:1840147_3",
"text": "We propose gradient adversarial training, an auxiliary deep learning framework applicable to different machine learning problems. In gradient adversarial training, we leverage a prior belief that in many contexts, simultaneous gradient updates should be statistically indistinguishable from each other. We enforce this consistency using an auxiliary network that classifies the origin of the gradient tensor, and the main network serves as an adversary to the auxiliary network in addition to performing standard task-based training. We demonstrate gradient adversarial training for three different scenarios: (1) as a defense to adversarial examples we classify gradient tensors and tune them to be agnostic to the class of their corresponding example, (2) for knowledge distillation, we do binary classification of gradient tensors derived from the student or teacher network and tune the student gradient tensor to mimic the teacher’s gradient tensor; and (3) for multi-task learning we classify the gradient tensors derived from different task loss functions and tune them to be statistically indistinguishable. For each of the three scenarios we show the potential of gradient adversarial training procedure. Specifically, gradient adversarial training increases the robustness of a network to adversarial attacks, is able to better distill the knowledge from a teacher network to a student network compared to soft targets, and boosts multi-task learning by aligning the gradient tensors derived from the task specific loss functions. Overall, our experiments demonstrate that gradient tensors contain latent information about whatever tasks are being trained, and can support diverse machine learning problems when intelligently guided through adversarialization using a auxiliary network.",
"title": ""
},
{
"docid": "neg:1840147_4",
"text": "The research tries to identify factors that are critical for a Big Data project’s success. In total, 27 success factors could be identified throughout the analysis of the published case studies. Subsequent to the identification, the success factors were categorized according to their importance for the project’s success. During the categorization process, 6 out of the 27 success factors were declared mission critical. Besides this identification of success factors, this thesis provides a process model as a suggested way to approach Big Data projects. The process model is divided into separate phases. In addition to a description of the tasks to fulfil, the identified success factors are assigned to the individual phases of the analysis process. Finally, this thesis provides a process model for Big Data projects and also assigns success factors to individual process stages, which are categorized according to their importance for the success of the entire project.",
"title": ""
},
{
"docid": "neg:1840147_5",
"text": "Automatic estrus detection techniques in dairy cows have been presented based on different traits. Pedometers and accelerometers are the most common sensor equipment. Most of the detection methods are associated with supervised classification techniques, in which the training set becomes a crucial reference. A training set obtained by visual observation is subjective and time consuming. Another limitation of this approach is that it usually does not consider the factors affecting successful alerts, such as the discriminative figure, the activity type of cows, and the location and direction of the sensor node placed on the neck collar of a cow. This paper presents a novel estrus detection method that uses the k-means clustering algorithm to create the training set online for each cow. The training set is finally used to build an activity classification model by SVM. The activity index counted by the classification results in each sampling period can measure a cow’s activity variation for assessing the onset of estrus. The experimental results indicate that the peak of estrus time is at least twice as high as that of non-estrus time in the activity index curve, and the method can enhance the sensitivity and significantly reduce the error rate.",
"title": ""
},
{
"docid": "neg:1840147_6",
"text": "Structure-function studies with mammalian reoviruses have been limited by the lack of a reverse-genetic system for engineering mutations into the viral genome. To circumvent this limitation in a partial way for the major outer-capsid protein sigma3, we obtained in vitro assembly of large numbers of virion-like particles by binding baculovirus-expressed sigma3 protein to infectious subvirion particles (ISVPs) that lack sigma3. A level of sigma3 binding approaching 100% of that in native virions was routinely achieved. The sigma3 coat in these recoated ISVPs (rcISVPs) appeared very similar to that in virions by electron microscopy and three-dimensional image reconstruction. rcISVPs retained full infectivity in murine L cells, allowing their use to study sigma3 functions in virus entry. Upon infection, rcISVPs behaved identically to virions in showing an extended lag phase prior to exponential growth and in being inhibited from entering cells by either the weak base NH4Cl or the cysteine proteinase inhibitor E-64. rcISVPs also mimicked virions in being incapable of in vitro activation to mediate lysis of erythrocytes and transcription of the viral mRNAs. Last, rcISVPs behaved like virions in showing minor loss of infectivity at 52 degrees C. Since rcISVPs contain virion-like levels of sigma3 but contain outer-capsid protein mu1/mu1C mostly cleaved at the delta-phi junction as in ISVPs, the fact that rcISVPs behaved like virions (and not ISVPs) in all of the assays that we performed suggests that sigma3, and not the delta-phi cleavage of mu1/mu1C, determines the observed differences in behavior between virions and ISVPs. To demonstrate the applicability of rcISVPs for genetic studies of protein functions in reovirus entry (an approach that we call recoating genetics), we used chimeric sigma3 proteins to localize the primary determinants of a strain-dependent difference in sigma3 cleavage rate to a carboxy-terminal region of the ISVP-bound protein.",
"title": ""
},
{
"docid": "neg:1840147_7",
"text": "We present a family with a Robertsonian translocation (RT) 15;21 and an inv(21)(q21.1q22.1) which was ascertained after the birth of a child with Down syndrome. Karyotyping revealed a translocation trisomy 21 in the patient. The mother was a carrier of a paternally inherited RT 15;21. Additionally, she and her mother showed a rare paracentric inversion of chromosome 21 which could not be observed in the Down syndrome patient. Thus, we concluded that the two free chromosomes 21 in the patient were of paternal origin. Remarkably, short tandem repeat (STR) typing revealed that the proband showed one paternal allele but two maternal alleles, indicating a maternal origin of the supernumerary chromosome 21. Due to the fact that chromosome analysis showed structurally normal chromosomes 21, a re-inversion of the free maternally inherited chromosome 21 must have occurred. Re-inversion and meiotic segregation error may have been co-incidental but unrelated events. Alternatively, the inversion or RT could have predisposed to maternal non-disjunction.",
"title": ""
},
{
"docid": "neg:1840147_8",
"text": "In this paper, we take an input-output approach to enhance the study of cooperative multiagent optimization problems that admit decentralized and selfish solutions, hence eliminating the need for an interagent communication network. The framework under investigation is a set of $n$ independent agents coupled only through an overall cost that penalizes the divergence of each agent from the average collective behavior. In the case of identical agents, or more generally agents with identical essential input-output dynamics, we show that optimal decentralized and selfish solutions are possible in a variety of standard input-output cost criteria. These include the cases of $\\ell_{1}, \\ell_{2}, \\ell_{\\infty}$ induced, and $\\mathcal{H}_{2}$ norms for any finite $n$. Moreover, if the cost includes non-deviation from average variables, the above results hold true as well for $\\ell_{1}, \\ell_{2}, \\ell_{\\infty}$ induced norms and any $n$, while they hold true for the normalized, per-agent square $\\mathcal{H}_{2}$ norm, cost as $n\\rightarrow\\infty$. We also consider the case of nonidentical agent dynamics and prove that similar results hold asymptotically as $n\\rightarrow\\infty$ in the case of $\\ell_{2}$ induced norms (i.e., $\\mathcal{H}_{\\infty}$) under a growth assumption on the $\\mathcal{H}_{\\infty}$ norm of the essential dynamics of the collective.",
"title": ""
},
{
"docid": "neg:1840147_9",
"text": "Automatic genre classification of music is an important topic in Music Information Retrieval with many interesting applications. A solution to genre classification would allow for machine tagging of songs, which could serve as metadata for building song recommenders. In this paper, we investigate the following question: Given a song, can we automatically detect its genre? We look at three characteristics of a song to determine its genre: timbre, chord transitions, and lyrics. For each method, we develop multiple data models and apply supervised machine learning algorithms including k-means, k-NN, multi-class SVM and Naive Bayes. We are able to accurately classify 65–75% of the songs from each genre in a 5-genre classification problem between Rock, Jazz, Pop, Hip-Hop, and Metal music.",
"title": ""
},
{
"docid": "neg:1840147_10",
"text": "Article history: Received 10 September 2012 Received in revised form 12 March 2013 Accepted 24 March 2013 Available online 23 April 2013",
"title": ""
},
{
"docid": "neg:1840147_11",
"text": "Presents a collection of slides covering the following topics: issues and challenges in power distribution network design; basics of power supply induced jitter (PSIJ) modeling; PSIJ design and modeling for key applications; and memory and parallel bus interfaces (serial links and digital logic timing).",
"title": ""
},
{
"docid": "neg:1840147_12",
"text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation, is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable however Steed's Method is both accurate and easy to use without requiring specialised hardware.",
"title": ""
},
{
"docid": "neg:1840147_13",
"text": "Context-awareness is a key concept in ubiquitous computing. But to avoid developing dedicated context-awareness sub-systems for specific application areas there is a need for more generic programming frameworks. Such frameworks can help the programmer to develop and deploy context-aware applications faster. This paper describes the Java Context-Awareness Framework – JCAF, which is a Java-based context-awareness infrastructure and programming API for creating context-aware computer applications. The paper presents the design principles behind JCAF, its runtime architecture, and its programming API. The paper also presents experience from using JCAF in three different applications and discusses lessons learned.",
"title": ""
},
{
"docid": "neg:1840147_14",
"text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.",
"title": ""
},
{
"docid": "neg:1840147_15",
"text": "In today’s world, almost everybody has access to computers, and network-based technology is growing by leaps and bounds. Network security has therefore become very important, indeed an inevitable part of any computer system. An Intrusion Detection System (IDS) is designed to detect system attacks and classify system activities into normal and abnormal forms. Machine learning techniques have been applied to intrusion detection systems, where they play an important role in detecting intrusions. This paper reviews different machine learning approaches for intrusion detection systems. It also presents the design of an intrusion detection system intended to reduce the false alarm rate and improve the accuracy of intrusion detection.",
"title": ""
},
{
"docid": "neg:1840147_16",
"text": "We propose a neural language model capable of unsupervised syntactic structure induction. The model leverages the structure information to form better semantic representations and better language modeling. Standard recurrent neural networks are limited by their structure and fail to efficiently use syntactic information. On the other hand, tree-structured recursive networks usually require additional structural supervision at the cost of human expert annotation. In this paper, we propose a novel neural language model, called the Parsing-Reading-Predict Networks (PRPN), that can simultaneously induce the syntactic structure from unannotated sentences and leverage the inferred structure to learn a better language model. In our model, the gradient can be directly back-propagated from the language model loss into the neural parsing network. Experiments show that the proposed model can discover the underlying syntactic structure and achieve state-of-the-art performance on word/character-level language model tasks.",
"title": ""
},
{
"docid": "neg:1840147_17",
"text": "The challenge of combatting malware designed to breach air-gap isolation in order to leak data.",
"title": ""
},
{
"docid": "neg:1840147_18",
"text": "BACKGROUND\nAutologous platelet-rich plasma has attracted attention in various medical fields recently, including orthopedic, plastic, and dental surgeries and dermatology for its wound healing ability. Further, it has been used clinically in mesotherapy for skin rejuvenation.\n\n\nOBJECTIVE\nIn this study, the effects of activated platelet-rich plasma (aPRP) and activated platelet-poor plasma (aPPP) have been investigated on the remodelling of the extracellular matrix, a process that requires activation of dermal fibroblasts, which is essential for rejuvenation of aged skin.\n\n\nMETHODS\nPlatelet-rich plasma (PRP) and platelet-poor plasma (PPP) were prepared using a double-spin method and then activated with thrombin and calcium chloride. The proliferative effects of aPRP and aPPP were measured by [(3)H]thymidine incorporation assay, and their effects on matrix protein synthesis were assessed by quantifying levels of procollagen type I carboxy-terminal peptide (PIP) by enzyme-linked immunosorbent assay (ELISA). The production of collagen and matrix metalloproteinases (MMP) was studied by Western blotting and reverse transcriptase-polymerase chain reaction.\n\n\nRESULTS\nPlatelet numbers in PRP increased to 9.4-fold over baseline values. aPRP and aPPP both stimulated cell proliferation, with peak proliferation occurring in cells grown in 5% aPRP. Levels of PIP were highest in cells grown in the presence of 5% aPRP. Additionally, aPRP and aPPP increased the expression of type I collagen, MMP-1 protein, and mRNA in human dermal fibroblasts.\n\n\nCONCLUSION\naPRP and aPPP promote tissue remodelling in aged skin and may be used as adjuvant treatment to lasers for skin rejuvenation in cosmetic dermatology.",
"title": ""
},
{
"docid": "neg:1840147_19",
"text": "Recently, tag recommendation (TR) has become a very hot research topic in data mining and related areas. However, neither co-occurrence based methods which only use the item-tag matrix nor content based methods which only use the item content information can achieve satisfactory performance in real TR applications. Hence, how to effectively combine the item-tag matrix, item content information, and other auxiliary information into the same recommendation framework is the key challenge for TR. In this paper, we first adapt the collaborative topic regression (CTR) model, which has been successfully applied for article recommendation, to combine both item-tag matrix and item content information for TR. Furthermore, by extending CTR we propose a novel hierarchical Bayesian model, called CTR with social regularization (CTR-SR), to seamlessly integrate the item-tag matrix, item content information, and social networks between items into the same principled model. Experiments on real data demonstrate the effectiveness of our proposed models.",
"title": ""
}
] |
1840148 | Acting like a Tough Guy: Violent-Sexist Video Games, Identification with Game Characters, Masculine Beliefs, and Empathy for Female Violence Victims | [
{
"docid": "pos:1840148_0",
"text": "To address the longitudinal relation between adolescents' habitual usage of media violence and aggressive behavior and empathy, N = 1237 seventh and eighth grade high school students in Germany completed measures of violent and nonviolent media usage, aggression, and empathy twice in twelve months. Cross-lagged panel analyses showed significant pathways from T1 media violence usage to higher physical aggression and lower empathy at T2. The reverse paths from T1 aggression or empathy to T2 media violence usage were nonsignificant. The links were similar for boys and girls. No links were found between exposure to nonviolent media and aggression or between violent media and relational aggression. T1 physical aggression moderated the impact of media violence usage, with stronger effects of media violence usage among the low aggression group. Introduction Despite the rapidly growing body of research addressing the potentially harmful effects of exposure to violent media, the evidence currently available is still limited in several ways. First, there is a shortage of longitudinal research examining the associations of media violence usage and aggression over time. Such evidence is crucial for examining hypotheses about the causal directions of observed co-variations of media violence usage and aggression that cannot be established on the basis of cross-sectional research. Second, most of the available longitudinal evidence has focused on aggression as the critical outcome variable, giving comparatively little attention to other potentially harmful effects, such as a decrease in empathy with others in distress. Third, the vast majority of studies available to date were conducted in North America. However, even in the age of globalization, patterns of media violence usage and their cultural contexts may vary considerably, calling for a wider database from different countries to examine the generalizability of results to address each of these aspects. It presents findings from a longitudinal study with a large sample of early adolescents in Germany, relating habitual usage of violence in movies, TV series, and interactive video games to self-reports of physical aggression and empathy over a period of twelve months. The study focused on early adolescence as a developmental period characterized by a confluence of risk factors as a result of biological, psychological, and social changes for a range of adverse outcomes. Regular media violence usage may significantly contribute to the overall risk of aggression as one such negative outcome. Media consumption increases from childhood …",
"title": ""
}
] | [
{
"docid": "neg:1840148_0",
"text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.",
"title": ""
},
{
"docid": "neg:1840148_1",
"text": "With the fast progression of electronic data exchange, information security is becoming more important in data storage and transmission. Because images are widely used in industrial processes, it is important to protect confidential image data from unauthorized access. In this paper, we analyzed current image encryption algorithms, and compression was added for two of them (Mirror-like image encryption and Visual Cryptography). Implementations of these two algorithms have been realized for experimental purposes. The results of the analysis are given in this paper. Keywords—image encryption, image cryptosystem, security, transmission.",
"title": ""
},
{
"docid": "neg:1840148_2",
"text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hopfield neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension specifies the type of task performed by the algorithm: preprocessing, data reduction/feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses specific constraints to a neural-based approach. These specific conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and specifically to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. © 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840148_3",
"text": "In viticulture, there are several applications where bud detection in vineyard images is a necessary task, susceptible of being automated through the use of computer vision methods. A common and effective family of visual detection algorithms are of the scanning-window type, which slide a (usually) fixed-size window along the original image, classifying each resulting windowed patch as containing or not containing the target object. The simplicity of these algorithms finds its most challenging aspect in the classification stage. Interested in grapevine bud detection in natural field conditions, this paper presents a classification method for images of grapevine buds ranging from 100 to 1600 pixels in diameter, captured outdoors under natural field conditions in winter (i.e., no grape bunches, very few leaves, and dormant buds), without artificial background, and with minimum equipment requirements. The proposed method uses well-known computer vision technologies: Scale-Invariant Feature Transform for calculating low-level features, Bag of Features for building an image descriptor, and Support Vector Machines for training a classifier. When evaluated over images containing buds of at least 100 pixels in diameter, the approach achieves a recall higher than 0.9 and a precision of 0.86 over all windowed patches covering the whole bud and down to 60% of it, and scaled up to window patches containing a proportion of 20%-80% of bud versus background pixels. This robustness to the position and size of the window demonstrates its viability for use as the classification stage in scanning-window detection algorithms.",
"title": ""
},
{
"docid": "neg:1840148_4",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markovlogic: aninterfacelayerfor arti?cialintelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "neg:1840148_5",
"text": "We propose an algorithm to separate simultaneously speaking persons from each other, the “cocktail party problem”, using a single microphone. Our approach involves a deep recurrent neural networks regression to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and decorrelating a given input vector with output weights. Although the matrix determined by the output weights is dependent on a set of known speakers, we only use the input vectors during inference. Doing so will ensure that source separation is explicitly speaker-independent. Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. We avoid, however, the severe computational burden of other approaches with our technique. Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and most importantly boasts better empirical performance than other current techniques.",
"title": ""
},
{
"docid": "neg:1840148_6",
"text": "In ophthalmic artery occlusion by hyaluronic acid injection, the globe may get worse by direct intravitreal administration of hyaluronidase. Retrograde cannulation of the ophthalmic artery may have the potential for restoration of retinal perfusion and minimizing the risk of phthisis bulbi. The study investigated the feasibility of cannulation of the ophthalmic artery for retrograde injection. In 10 right orbits of 10 cadavers, cannulation and ink injection of the supraorbital artery in the supraorbital approach were performed under surgical loupe magnification. In 10 left orbits, the medial upper lid was curvedly incised to retrieve the retroseptal ophthalmic artery for cannulation by a transorbital approach. Procedural times were recorded. Diameters of related arteries were bilaterally measured for comparison. Dissections to verify dye distribution were performed. Cannulation was successfully performed in 100 % and 90 % of the transorbital and the supraorbital approaches, respectively. The transorbital approach was more practical to perform compared with the supraorbital approach due to a trend toward a short procedure time (18.4 ± 3.8 vs. 21.9 ± 5.0 min, p = 0.74). The postseptal ophthalmic artery exhibited a tortuous course, was easily retrieved and cannulated, and had a larger diameter compared to the supraorbital artery (1.25 ± 0.23 vs. 0.84 ± 0.16 mm, p = 0.000). The transorbital approach is more practical than the supraorbital approach for retrograde cannulation of the ophthalmic artery. This study provides a reliable access route implication for hyaluronidase injection into the ophthalmic artery to salvage central retinal occlusion following hyaluronic acid injection. This journal requires that authors assign a level of evidence to each submission to which Evidence-Based Medicine rankings are applicable. This excludes Review Articles, Book Reviews, and manuscripts that concern Basic Science, Animal Studies, Cadaver Studies, and Experimental Studies. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .",
"title": ""
},
{
"docid": "neg:1840148_7",
"text": "Agricultural residues, such as lignocellulosic materials (LM), are the most attractive renewable bioenergy sources and are abundantly found in nature. Anaerobic digestion has been extensively studied for the effective utilization of LM for biogas production. Experimental investigation of physiochemical changes that occur during pretreatment is needed for developing mechanistic and effective models that can be employed for the rational design of pretreatment processes. Various cutting-edge pretreatment technologies (physical, chemical and biological) are being tested on the pilot scale. These different pretreatment methods are widely described in this paper; among them, microaerobic pretreatment (MP) has gained attention as a potential pretreatment method for the degradation of LM, which just requires a limited amount of oxygen (or air) supplied directly during the pretreatment step. MP involves microbial communities under mild conditions (temperature and pressure), uses fewer enzymes and less energy for methane production, and is probably the most promising and environmentally friendly technique in the long run. Moreover, it is technically and economically feasible to use microorganisms instead of expensive chemicals, biological enzymes or mechanical equipment. The information provided in this paper will endow readers with the background knowledge necessary for finding a promising solution to methane production.",
"title": ""
},
{
"docid": "neg:1840148_8",
"text": "In this study, different S/D contacting options for lateral NWFET devices are benchmarked at 7nm node dimensions and beyond. Comparison is done at both DC and ring oscillator levels. It is demonstrated that implementing a direct contact to a fin made of Si/SiGe super-lattice results in 13% performance improvement. Also, we conclude that the integration of internal spacers between the NWs is a must for lateral NWFETs in order to reduce device parasitic capacitance.",
"title": ""
},
{
"docid": "neg:1840148_9",
"text": "In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learn such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the amount of robot experiments, however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers, and is data-efficient. Our approach is based on learned probabilistic forward models and information theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high quality policies.",
"title": ""
},
{
"docid": "neg:1840148_10",
"text": "This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on ldquoguidingrdquo environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.",
"title": ""
},
{
"docid": "neg:1840148_11",
"text": "Human activity recognition (HAR) is an important research area in the fields of human perception and computer vision due to its wide range of applications. These applications include: intelligent video surveillance, ambient assisted living, human computer interaction, human-robot interaction, entertainment, and intelligent driving. Recently, with the emergence and successful deployment of deep learning techniques for image classification, researchers have migrated from traditional handcrafting to deep learning techniques for HAR. However, handcrafted representation-based approaches are still widely used due to some bottlenecks such as computational complexity of deep learning techniques for activity recognition. However, approaches based on handcrafted representation are not able to handle complex scenarios due to their limitations and incapability; therefore, resorting to deep learning-based techniques is a natural option. This review paper presents a comprehensive survey of both handcrafted and learning-based action representations, offering comparison, analysis, and discussions on these approaches. In addition to this, the well-known public datasets available for experimentations and important applications of HAR are also presented to provide further insight into the field. This is the first review paper of its kind which presents all these aspects of HAR in a single review article with comprehensive coverage of each part. Finally, the paper is concluded with important discussions and research directions in the domain of HAR.",
"title": ""
},
{
"docid": "neg:1840148_12",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "neg:1840148_13",
"text": "Gamification informally refers to making a system more game-like. More specifically, gamification denotes applying game mechanics to a non-game system. We theorize that gamification success depends on the game mechanics employed and their effects on user motivation and immersion. The proposed theory may be tested using an experiment or questionnaire study.",
"title": ""
},
{
"docid": "neg:1840148_14",
"text": "Mininet is network emulation software that allows launching a virtual network with switches, hosts and an SDN controller all with a single command on a single Linux kernel. It is a great way to start learning about SDN and Open-Flow as well as test SDN controller and SDN applications. Mininet can be used to deploy large networks on a single computer or virtual machine provided with limited resources. It is freely available open source software that emulates Open-Flow device and SDN controllers. Keywords— SDN, Mininet, Open-Flow, Python, Wireshark",
"title": ""
},
{
"docid": "neg:1840148_15",
"text": "We introduce a new statistical model for time series that iteratively segments data into regimes with approximately linear dynamics and learns the parameters of each of these linear regimes. This model combines and generalizes two of the most widely used stochastic time-series modelshidden Markov models and linear dynamical systemsand is closely related to models that are widely used in the control and econometrics literatures. It can also be derived by extending the mixture of experts neural network (Jacobs, Jordan, Nowlan, & Hinton, 1991) to its fully dynamical version, in which both expert and gating networks are recurrent. Inferring the posterior probabilities of the hidden states of this model is computationally intractable, and therefore the exact expectation maximization (EM) algorithm cannot be applied. However, we present a variational approximation that maximizes a lower bound on the log-likelihood and makes use of both the forward and backward recursions for hidden Markov models and the Kalman filter recursions for linear dynamical systems. We tested the algorithm on artificial data sets and a natural data set of respiration force from a patient with sleep apnea. The results suggest that variational approximations are a viable method for inference and learning in switching state-space models.",
"title": ""
},
{
"docid": "neg:1840148_16",
"text": "Recent trends in robot learning are to use trajectory-based optimal control techniques and reinforcement learning to scale complex robotic systems. On the one hand, increased computational power and multiprocessing, and on the other hand, probabilistic reinforcement learning methods and function approximation, have contributed to a steadily increasing interest in robot learning. Imitation learning has helped significantly to start learning with reasonable initial behavior. However, many applications are still restricted to rather lowdimensional domains and toy applications. Future work will have to demonstrate the continual and autonomous learning abilities, which were alluded to in the introduction.",
"title": ""
},
{
"docid": "neg:1840148_17",
"text": "Many blockchain-based cryptocurrencies such as Bitcoin and Ethereum use Nakamoto consensus protocol to reach agreement on the blockchain state between a network of participant nodes. The Nakamoto consensus protocol probabilistically selects a leader via a mining process which rewards network participants (or miners) to solve computational puzzles. Finding solutions for such puzzles requires an enormous amount of computation. Thus, miners often aggregate resources into pools and share rewards amongst all pool members via pooled mining protocol. Pooled mining helps reduce the variance of miners’ payoffs significantly and is widely adopted in popular cryptocurrencies. For example, as of this writing, more than 95% of mining power in Bitcoin emanates from 10 mining pools. Although pooled mining benefits miners, it severely degrades decentralization, since a centralized pool manager administers the pooling protocol. Furthermore, pooled mining increases the transaction censorship significantly since pool managers decide which transactions are included in blocks. Due to this widely recognized threat, the Bitcoin community has proposed an alternative called P2Pool which decentralizes the operations of the pool manager. However, P2Pool is inefficient, increases the variance of miners’ rewards, requires much more computation and bandwidth from miners, and has not gained wide adoption. In this work, we propose a new protocol design for a decentralized mining pool. Our protocol called SMARTPOOL shows how one can leverage smart contracts, which are autonomous agents themselves running on decentralized blockchains, to decentralize cryptocurrency mining. SMARTPOOL guarantees high security, low reward’s variance for miners and is cost-efficient. We implemented a prototype of SMARTPOOL as an Ethereum smart contract working as a decentralized mining pool for Bitcoin. 
We have deployed it on the Ethereum testnet and our experiments confirm that SMARTPOOL is efficient and ready for practical use.",
"title": ""
},
{
"docid": "neg:1840148_18",
"text": "Disentangling the effects of selection and influence is one of social science's greatest unsolved puzzles: Do people befriend others who are similar to them, or do they become more similar to their friends over time? Recent advances in stochastic actor-based modeling, combined with self-reported data on a popular online social network site, allow us to address this question with a greater degree of precision than has heretofore been possible. Using data on the Facebook activity of a cohort of college students over 4 years, we find that students who share certain tastes in music and in movies, but not in books, are significantly likely to befriend one another. Meanwhile, we find little evidence for the diffusion of tastes among Facebook friends-except for tastes in classical/jazz music. These findings shed light on the mechanisms responsible for observed network homogeneity; provide a statistically rigorous assessment of the coevolution of cultural tastes and social relationships; and suggest important qualifications to our understanding of both homophily and contagion as generic social processes.",
"title": ""
},
{
"docid": "neg:1840148_19",
"text": "The first RADAR patent was applied for by Christian Huelsmeyer on April 30, 1904 at the patent office in Berlin, Germany. He was motivated by a ship accident on the river Weser and called his experimental system ”Telemobiloscope”. In this chapter some important and modern topics in radar system design and radar signal processing will be discussed. Waveform design is one innovative topic where new results are available for special applications like automotive radar. Detection theory is a fundamental radar topic which will be discussed in this chapter for new range CFAR schemes which are essential for all radar systems. Target recognition has for many years been the dream of all radar engineers. New results for target classification will be discussed for some automotive radar sensors.",
"title": ""
}
] |
1840149 | Gust loading factor — past, present and future | [
{
"docid": "pos:1840149_0",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. 
Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, notwithstanding recent research in the area of computational fluid dynamics.",
"title": ""
},
{
"docid": "pos:1840149_1",
"text": "Most international codes and standards provide guidelines and procedures for assessing the along-wind effec structures. Despite their common use of the ‘‘gust loading factor’’ ~GLF! approach, sizeable scatter exists among the wind eff predicted by the various codes and standards under similar flow conditions. This paper presents a comprehensive assessment o of this scatter through a comparison of the along-wind loads and their effects on tall buildings recommended by major internation and standards. ASCE 7-98 ~United States !, AS1170.2-89~Australia!, NBC-1995~Canada!, RLB-AIJ-1993 ~Japan!, and Eurocode-1993 ~Europe! are examined in this study. The comparisons consider the definition of wind characteristics, mean wind loads, GLF, eq static wind loads, and attendant wind load effects. It is noted that the scatter in the predicted wind loads and their effects arises from the variations in the definition of wind field characteristics in the respective codes and standards. A detailed example is pre illustrate the overall comparison and to highlight the main findings of this paper. DOI: 10.1061/ ~ASCE!0733-9445~2002!128:6~788! CE Database keywords: Buildings, highrise; Building codes; Wind loads; Dynamics; Wind velocity.",
"title": ""
},
{
"docid": "pos:1840149_2",
"text": "The aerodynamic admittance function (AAF) has been widely invoked to relate wind pressures on building surfaces to the oncoming wind velocity. In current practice, strip and quasi-steady theories are generally employed in formulating wind effects in the along-wind direction. These theories permit the representation of the wind pressures on building surfaces in terms of the oncoming wind velocity field. Synthesis of the wind velocity field leads to a generalized wind load that employs the AAF. This paper reviews the development of the current AAF in use. It is followed by a new definition of the AAF, which is based on the base bending moment. It is shown that the new AAF is numerically equivalent to the currently used AAF for buildings with linear mode shape and it can be derived experimentally via high frequency base balance. New AAFs for square and rectangular building models were obtained and compared with theoretically derived expressions. Some discrepancies between experimentally and theoretically derived AAFs in the high frequency range were noted.",
"title": ""
},
{
"docid": "pos:1840149_3",
"text": "Under the action of wind, tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. While the alongwind loads have been successfully treated using quasi-steady and strip theories in terms of gust loading factors, the acrosswind and torsional loads cannot be treated in this manner, since these loads cannot be related in a straightforward manner to the fluctuations in the approach flow. Accordingly, most current codes and standards provide little guidance for the acrosswind and torsional response. To fill this gap, a preliminary, interactive database of aerodynamic loads is presented, which can be accessed by any user with Microsoft Explorer at the URL address http://www.nd.edu/;nathaz/. The database is comprised of high-frequency base balance measurements on a host of isolated tall building models. Combined with the analysis procedure provided, the nondimensional aerodynamic loads can be used to compute the wind-induced response of tall buildings. The influence of key parameters, such as the side ratio, aspect ratio, and turbulence characteristics for rectangular sections, is also discussed. The database and analysis procedure are viable candidates for possible inclusion as a design guide in the next generation of codes and standards. DOI: 10.1061/~ASCE!0733-9445~2003!129:3~394! CE Database keywords: Aerodynamics; Wind loads; Wind tunnels; Databases; Random vibration; Buildings, high-rise; Turbulence. 394 / JOURNAL OF STRUCTURAL ENGINEERING © ASCE / MARCH 2003 tic model tests are presently used as routine tools in commercial design practice. However, considering the cost and lead time needed for wind tunnel testing, a simplified procedure would be desirable in the preliminary design stages, allowing early assessment of the structural resistance, evaluation of architectural or structural changes, or assessment of the need for detailed wind tunnel tests. 
Two kinds of wind tunnel-based procedures have been introduced in some of the existing codes and standards to treat the acrosswind and torsional response. The first is an empirical expression for the wind-induced acceleration, such as that found in the National Building Code of Canada ~NBCC! ~NRCC 1996!, while the second is an aerodynamic-load-based procedure such as those in Australian Standard ~AS 1989! and the Architectural Institute of Japan ~AIJ! Recommendations ~AIJ 1996!. The latter approach offers more flexibility as the aerodynamic load provided can be used to determine the response of any structure having generally the same architectural features and turbulence environment of the tested model, regardless of its structural characteristics. Such flexibility is made possible through the use of well-established wind-induced response analysis procedures. Meanwhile, there are some databases involving isolated, generic building shapes available in the literature ~e.g., Kareem 1988; Choi and Kanda 1993; Marukawa et al. 1992!, which can be expanded using HFBB tests. For example, a number of commercial wind tunnel facilities have accumulated data of actual buildings in their natural surroundings, which may be used to supplement the overall loading database. Though such HFBB data has been collected, it has not been assimilated and made accessible to the worldwide community, to fully realize its potential. Fortunately, the Internet now provides the opportunity to pool and archive the international stores of wind tunnel data. This paper takes the first step in that direction by introducing an interactive database of aerodynamic loads obtained from HFBB measurements on a host of isolated tall building models, accessible to the worldwide Internet community via Microsoft Explorer at the URL address http://www.nd.edu/;nathaz. Through the use of this interactive portal, users can select the Engineer, Malouf Engineering International, Inc., 275 W. 
Campbell Rd., Suite 611, Richardson, TX 75080; Fomerly, Research Associate, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: yzhou@nd.edu Graduate Student, NatHaz Modeling Laboratory, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: tkijewsk@nd.edu Robert M. Moran Professor, Dept. of Civil Engineering and Geological Sciences, Univ. of Notre Dame, Notre Dame, IN 46556. E-mail: kareem@nd.edu. Note. Associate Editor: Bogusz Bienkiewicz. Discussion open until August 1, 2003. Separate discussions must be submitted for individual papers. To extend the closing date by one month, a written request must be filed with the ASCE Managing Editor. The manuscript for this paper was submitted for review and possible publication on April 24, 2001; approved on December 11, 2001. This paper is part of the Journal of Structural Engineering, Vol. 129, No. 3, March 1, 2003. ©ASCE, ISSN 0733-9445/2003/3-394–404/$18.00. Introduction Under the action of wind, typical tall buildings oscillate simultaneously in the alongwind, acrosswind, and torsional directions. It has been recognized that for many high-rise buildings the acrosswind and torsional response may exceed the alongwind response in terms of both serviceability and survivability designs ~e.g., Kareem 1985!. Nevertheless, most existing codes and standards provide only procedures for the alongwind response and provide little guidance for the critical acrosswind and torsional responses. This is partially attributed to the fact that the acrosswind and torsional responses, unlike the alongwind, result mainly from the aerodynamic pressure fluctuations in the separated shear layers and wake flow fields, which have prevented, to date, any acceptable direct analytical relation to the oncoming wind velocity fluctuations. 
Further, higher-order relationships may exist that are beyond the scope of the current discussion ~Gurley et al. 2001!. Wind tunnel measurements have thus served as an effective alternative for determining acrosswind and torsional loads. For example, the high-frequency base balance ~HFBB! and aeroelasgeometry and dimensions of a model building, from the available choices, and specify an urban or suburban condition. Upon doing so, the aerodynamic load spectra for the alongwind, acrosswind, or torsional response is displayed along with a Java interface that permits users to specify a reduced frequency of interest and automatically obtain the corresponding spectral value. When coupled with the concise analysis procedure, discussion, and example provided, the database provides a comprehensive tool for computation of the wind-induced response of tall buildings. Wind-Induced Response Analysis Procedure Using the aerodynamic base bending moment or base torque as the input, the wind-induced response of a building can be computed using random vibration analysis by assuming idealized structural mode shapes, e.g., linear, and considering the special relationship between the aerodynamic moments and the generalized wind loads ~e.g., Tschanz and Davenport 1983; Zhou et al. 2002!. This conventional approach yields only approximate estimates of the mode-generalized torsional moments and potential inaccuracies in the lateral loads if the sway mode shapes of the structure deviate significantly from the linear assumption. As a result, this procedure often requires the additional step of mode shape corrections to adjust the measured spectrum weighted by a linear mode shape to the true mode shape ~Vickery et al. 1985; Boggs and Peterka 1989; Zhou et al. 2002!. However, instead of utilizing conventional generalized wind loads, a base-bendingmoment-based procedure is suggested here for evaluating equivalent static wind loads and response. As discussed in Zhou et al. 
~2002!, the influence of nonideal mode shapes is rather negligible for base bending moments, as opposed to other quantities like base shear or generalized wind loads. As a result, base bending moments can be used directly, presenting a computationally efficient scheme, averting the need for mode shape correction and directly accommodating nonideal mode shapes. Application of this procedure for the alongwind response has proven effective in recasting the traditional gust loading factor approach in a new format ~Zhou et al. 1999; Zhou and Kareem 2001!. The procedure can be conveniently adapted to the acrosswind and torsional response ~Boggs and Peterka 1989; Kareem and Zhou 2003!. It should be noted that the response estimation based on the aerodynamic database is not advocated for acrosswind response calculations in situations where the reduced frequency is equal to or slightly less than the Strouhal number ~Simiu and Scanlan 1996; Kijewski et al. 2001!. In such cases, the possibility of negative aerodynamic damping, a manifestation of motion-induced effects, may cause the computed results to be inaccurate ~Kareem 1982!. Assuming a stationary Gaussian process, the expected maximum base bending moment response in the alongwind or acrosswind directions or the base torque response can be expressed in the following form:",
"title": ""
}
] | [
{
"docid": "neg:1840149_0",
"text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.",
"title": ""
},
{
"docid": "neg:1840149_1",
"text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).",
"title": ""
},
{
"docid": "neg:1840149_2",
"text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.",
"title": ""
},
{
"docid": "neg:1840149_3",
"text": "OBJECTIVE\nIn this study, we explored the impact of an occupational therapy wellness program on daily habits and routines through the perspectives of youth and their parents.\n\n\nMETHOD\nData were collected through semistructured interviews with children and their parents, the Pizzi Healthy Weight Management Assessment(©), and program activities.\n\n\nRESULTS\nThree themes emerged from the interviews: Program Impact, Lessons Learned, and Time as a Barrier to Health. The most common areas that both youth and parents wanted to change were time spent watching television and play, fun, and leisure time. Analysis of activity pie charts indicated that the youth considerably increased active time in their daily routines from Week 1 to Week 6 of the program.\n\n\nCONCLUSION\nAn occupational therapy program focused on health and wellness may help youth and their parents be more mindful of their daily activities and make health behavior changes.",
"title": ""
},
{
"docid": "neg:1840149_4",
"text": "This paper presents a compact 10-bit digital-to-analog converter (DAC) for LCD source drivers. The cyclic DAC architecture is used to reduce the area of LCD column drivers when compared to the use of conventional resistor-string DACs. The current interpolation technique is proposed to perform gamma correction after D/A conversion. The gamma correction circuit is shared by four DAC channels using the interleave technique. A prototype 10-bit DAC with gamma correction function is implemented in 0.35 μm CMOS technology and its average die size per channel is 0.053 mm2, which is smaller than those of the R-DACs with gamma correction function. The settling time of the 10-bit DAC is 1 μs, and the maximum INL and DNL are 2.13 least significant bit (LSB) and 1.30 LSB, respectively.",
"title": ""
},
{
"docid": "neg:1840149_5",
"text": "This article presents a paradigm case portrait of female romantic partners of heavy pornography users. Based on a sample of 100 personal letters, this portrait focuses on their often traumatic discovery of the pornography usage and the significance they attach to this usage for (a) their relationships, (b) their own worth and desirability, and (c) the character of their partners. Finally, we provide a number of therapeutic recommendations for helping these women to think and act more effectively in their very difficult circumstances.",
"title": ""
},
{
"docid": "neg:1840149_6",
"text": "Object detection methods like Single Shot Multibox Detector (SSD) provide highly accurate object detection that run in real-time. However, these approaches require a large number of annotated training images. Evidently, not all of these images are equally useful for training the algorithms. Moreover, obtaining annotations in terms of bounding boxes for each image is costly and tedious. In this paper, we aim to obtain a highly accurate object detector using only a fraction of the training images. We do this by adopting active learning that uses ‘human in the loop’ paradigm to select the set of images that would be useful if annotated. Towards this goal, we make the following contributions: 1. We develop a novel active learning method which poses the layered architecture used in object detection as a ‘query by committee’ paradigm to choose the set of images to be queried. 2. We introduce a framework to use the exploration/exploitation trade-off in our methods. 3. We analyze the results on standard object detection datasets which show that with only a third of the training data, we can obtain more than 95% of the localization accuracy of full supervision. Further our methods outperform classical uncertainty-based active learning algorithms like maximum entropy.",
"title": ""
},
{
"docid": "neg:1840149_7",
"text": "This paper proposes a method for designing a sentence set for utterances taking account of prosody. This method is based on a measure of coverage which incorporates two factors: (1) the distribution of voice fundamental frequency and phoneme duration predicted by the prosody generation module of a TTS; (2) perceptual damage to naturalness due to prosody modification. A set of 500 sentences with a predicted coverage of 82.6% was designed by this method, and used to collect a speech corpus. The obtained speech corpus yielded 88% of the predicted coverage. The data size was reduced to 49% in terms of number of sentences (89% in terms of number of phonemes) compared to a general-purpose corpus designed without taking prosody into account.",
"title": ""
},
{
"docid": "neg:1840149_8",
"text": "Millimeter-wave reconfigurable antennas are predicted as a future of next generation wireless networks with the availability of wide bandwidth. A coplanar waveguide (CPW) fed T-shaped frequency reconfigurable millimeter-wave antenna for 5G networks is presented. The resonant frequency is varied to obtain the 10 dB return loss bandwidth in the frequency range of 23–29 GHz by incorporating two variable resistors. The radiation pattern exhibits two symmetrical radiation beams at approximately ±30° along the end-fire direction. The 3 dB beamwidth remains nearly constant over the entire range of operating bandwidth. The proposed antenna targets the applications of wireless systems operating in narrow passages, corridors, mine tunnels, and person-to-person body centric applications.",
"title": ""
},
{
"docid": "neg:1840149_9",
"text": "The growing problem of unsolicited bulk e-mail, also known as “spam”, has generated a need for reliable anti-spam e-mail filters. Filters of this type have so far been based mostly on manually constructed keyword patterns. An alternative approach has recently been proposed, whereby a Naive Bayesian classifier is trained automatically to detect spam messages. We test this approach on a large collection of personal e-mail messages, which we make publicly available in “encrypted” form contributing towards standard benchmarks. We introduce appropriate cost-sensitive measures, investigating at the same time the effect of attribute-set size, training-corpus size, lemmatization, and stop lists, issues that have not been explored in previous experiments. Finally, the Naive Bayesian filter is compared, in terms of performance, to a filter that uses keyword patterns, and which is part of a widely used e-mail reader.",
"title": ""
},
{
"docid": "neg:1840149_10",
"text": "Traffic flow prediction is an essential function of traffic information systems. Conventional approaches, using artificial neural networks with narrow network architecture and poor training samples for supervised learning, have been only partially successful. In this paper, a deep-learning neural-network based on TensorFlow™ is suggested for the prediction of traffic flow conditions, using real-time traffic data. Until now, no research has applied the TensorFlow™ deep learning neural network model to the estimation of traffic conditions. The suggested supervised model is trained by a deep learning algorithm, which uses real traffic data aggregated every five minutes. Results demonstrate that the model's accuracy rate is around 99%.",
"title": ""
},
{
"docid": "neg:1840149_11",
"text": "This study evaluates various evolutionary search methods to direct neural controller evolution in company with policy (behavior) transfer across increasingly complex collective robotic (RoboCup keep-away) tasks. Robot behaviors are first evolved in a source task and then transferred for further evolution to more complex target tasks. Evolutionary search methods tested include objective-based search (fitness function), behavioral and genotypic diversity maintenance, and hybrids of such diversity maintenance and objective-based search. Evolved behavior quality is evaluated according to effectiveness and efficiency. Effectiveness is the average task performance of transferred and evolved behaviors, where task performance is the average time the ball is controlled by a keeper team. Efficiency is the average number of generations taken for the fittest evolved behaviors to reach a minimum task performance threshold given policy transfer. Results indicate that policy transfer coupled with hybridized evolution (behavioral diversity maintenance and objective-based search) addresses the bootstrapping problem for increasingly complex keep-away tasks. That is, this hybrid method (coupled with policy transfer) evolves behaviors that could not otherwise be evolved. Also, this hybrid evolutionary search was demonstrated as consistently evolving topologically simple neural controllers that elicited high-quality behaviors.",
"title": ""
},
{
"docid": "neg:1840149_12",
"text": "We extend Fano’s inequality, which controls the average probability of events in terms of the average of some f–divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary [0, 1]–valued random variables, possibly in continuously infinite number. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs for existing lower bounds, on Bayesian posterior concentration (minimax or distribution-dependent) rates and on the regret in non-stochastic sequential learning. MSC 2000 subject classifications. Primary-62B10; secondary-62F15, 68T05.",
"title": ""
},
{
"docid": "neg:1840149_13",
"text": "Categorization is a vitally important skill that people use every day. Early theories of category learning assumed a single learning system, but recent evidence suggests that human category learning may depend on many of the major memory systems that have been hypothesized by memory researchers. As different memory systems flourish under different conditions, an understanding of how categorization uses available memory systems will improve our understanding of a basic human skill, lead to better insights into the cognitive changes that result from a variety of neurological disorders, and suggest improvements in training procedures for complex categorization tasks.",
"title": ""
},
{
"docid": "neg:1840149_14",
"text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and neither relies on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.",
"title": ""
},
{
"docid": "neg:1840149_15",
"text": "Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.",
"title": ""
},
{
"docid": "neg:1840149_16",
"text": "Recommender systems base their operation on past user ratings over a collection of items, for instance, books, CDs, etc. Collaborative filtering (CF) is a successful recommendation technique that confronts the ‘‘information overload’’ problem. Memory-based algorithms recommend according to the preferences of nearest neighbors, and model-based algorithms recommend by first developing a model of user ratings. In this paper, we bring to the surface factors that affect the CF process in order to identify existing false beliefs. In terms of accuracy, by being able to view the ‘‘big picture’’, we propose new approaches that substantially improve the performance of CF algorithms. For instance, we obtain more than 40% increase in precision in comparison to widely-used CF algorithms. In terms of efficiency, we propose a model-based approach based on latent semantic indexing (LSI), that reduces execution times by at least 50% compared to the classic",
"title": ""
},
{
"docid": "neg:1840149_17",
"text": "We conducted two pilot studies to select the appropriate e-commerce website type and contents for the homepage stimuli. The purpose of Pilot Study 1 was to select a website category with which subjects are not familiar, for which they show neither liking nor disliking, but have some interests in browsing. Unfamiliarity with the website was required because familiarity with a certain category of website may influence perceived complexity of (Radocy and Boyle 1988) and liking for the webpage stimuli (Bornstein 1989; Zajonc 2000). We needed a website for which subjects showed neither liking nor disliking so that the manipulation of webpage stimuli in the experiment could be assumed to be the major influence on their reported emotional responses and approach tendencies. To have some degree of interest in browsing the website is necessary for subjects to engage in experiential web-browsing activities with the webpage stimuli. Based on the results of Pilot Study 1, we selected the gifts website as the context for the experimental stimuli. Then, we conducted Pilot Study 2 to identify appropriate gift items to be included in the webpage stimuli. Thirteen gift items, which were shown to elicit neutral affect in the subjects and to be of some interest to the subjects for browsing or purchase, were selected for the website.",
"title": ""
},
{
"docid": "neg:1840149_18",
"text": "We introduce a novel method for approximate alignment of point-based surfaces. Our approach is based on detecting a set of salient feature points using a scale-space representation. For each feature point we compute a signature vector that is approximately invariant under rigid transformations. We use the extracted signed feature set in order to obtain approximate alignment of two surfaces. We apply our method for the automatic alignment of multiple scans using both scan-to-scan and scan-to-model matching capabilities.",
"title": ""
},
{
"docid": "neg:1840149_19",
"text": "There is a growing interest in the use of chronic deep brain stimulation (DBS) for the treatment of medically refractory movement disorders and other neurological and psychiatric conditions. Fundamental questions remain about the physiologic effects and safety of DBS. Previous basic research studies have focused on the direct polarization of neuronal membranes by electrical stimulation. The goal of this paper is to provide information on the thermal effects of DBS using finite element models to investigate the magnitude and spatial distribution of DBS induced temperature changes. The parameters investigated include: stimulation waveform, lead selection, brain tissue electrical and thermal conductivity, blood perfusion, metabolic heat generation during the stimulation. Our results show that clinical deep brain stimulation protocols will increase the temperature of surrounding tissue by up to 0.8 °C depending on stimulation/tissue parameters",
"title": ""
}
] |
1840150 | Making Learning and Web 2.0 Technologies Work for Higher Learning Institutions in Africa | [
{
"docid": "pos:1840150_0",
"text": "Proponents have marketed e-learning by focusing on its adoption as the right thing to do while disregarding, among other things, the concerns of the potential users, the adverse effects on users and the existing research on the use of e-learning or related innovations. In this paper, the e-learning-adoption proponents are referred to as the technopositivists. It is argued that most of the technopositivists in the higher education context are driven by a personal agenda, with the aim of propagating a technopositivist ideology to stakeholders. The technopositivist ideology is defined as a ‘compulsive enthusiasm’ about e-learning in higher education that is being created, propagated and channelled repeatedly by the people who are set to gain without giving the educators the time and opportunity to explore the dangers and rewards of e-learning on teaching and learning. Ten myths on e-learning that the technopositivists have used are presented with the aim of initiating effective and constructive dialogue, rather than merely criticising the efforts being made. Introduction The use of technology, and in particular e-learning, in higher education is becoming increasingly popular. However, Guri-Rosenblit (2005) and Robertson (2003) propose that educational institutions should step back and reflect on critical questions regarding the use of technology in teaching and learning. The focus of Guri-Rosenblit’s article is on diverse issues of e-learning implementation in higher education, while Robertson focuses on the teacher. Both papers show that there is a change in the ‘euphoria towards eLearning’ and that a dose of techno-negativity or techno-scepticism is required so that the gap between rhetoric in the literature (with all the promises) and actual implementation can be bridged for an informed stance towards e-learning adoption. British Journal of Educational Technology Vol 41 No 2 2010 199–212 doi:10.1111/j.1467-8535.2008.00910.x © 2008 The Authors. 
Journal compilation © 2008 British Educational Communications and Technology Agency. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Technology in teaching and learning has been marketed or presented to its intended market with a lot of promises, benefits and opportunities. This technopositivist ideology has denied educators and educational researchers the much needed opportunities to explore the motives, power, rewards and sanctions of information and communication technologies (ICTs), as well as time to study the impacts of the new technologies on learning and teaching. Educational research cannot cope with the speed at which technology is advancing (Guri-Rosenblit, 2005; Robertson, 2003; Van Dusen, 1998; Watson, 2001). Indeed there has been no clear distinction between teaching with and teaching about technology and therefore the relevance of such studies has not been brought to the fore. Much of the focus is on the actual educational technology as it advances, rather than its educational functions or the effects it has on the functions of teaching and learning. The teaching profession has been affected by the implementation and use of ICT through these optimistic views, and the ever-changing teaching and learning culture (Kompf, 2005; Robertson, 2003). It is therefore necessary to pause and ask the question to the technopositivist ideologists: whether in e-learning the focus is on the ‘e’ or on the learning. The opportunities and dangers brought about by the ‘e’ in e-learning should be soberly examined. As Gandolfo (1998, p. 24) suggests: [U]ndoubtedly, there is opportunity; the effective use of technology has the potential to improve and enhance learning. 
Just as assuredly there is the danger that the wrong-headed adoption of various technologies apart from a sound grounding in educational research and practice will result, and indeed in some instances has already resulted, in costly additions to an already expensive enterprise without any value added. That is, technology applications must be consonant with what is known about the nature of learning and must be assessed to ensure that they are indeed enhancing learners’ experiences. Technopositivist ideology is a ‘compulsory enthusiasm’ about technology that is being created, propagated and channelled repeatedly by the people who stand to gain either economically, socially, politically or otherwise in due disregard of the trade-offs associated with the technology to the target audience (Kompf, 2005; Robertson, 2003). In e-learning, the beneficiaries of the technopositivist market are doing so by presenting it with promises that would dismiss the judgement of many. This is aptly illustrated by Robertson (2003, pp. 284–285): Information technology promises to deliver more (and more important) learning for every student accomplished in less time; to ensure ‘individualization’ no matter how large and diverse the class; to obliterate the differences and disadvantages associated with race, gender, and class; to vary and yet standardize the curriculum; to remove subjectivity from student evaluation; to make reporting and record keeping a snap; to keep discipline problems to a minimum; to enhance professional learning and discourse; and to transform the discredited teacher-centered classroom into that paean of pedagogy: the constructivist, student-centered classroom. On her part, Guri-Rosenblit (2005, p. 14) argues that the proponents and marketers of e-learning present it as offering multiple uses that do not have a clear relationship with a current or future problem. 
She asks two ironic, vital and relevant questions: ‘If it ain’t broken, why fix it?’ and ‘Technology is the answer—but what are the questions?’ The enthusiasm to use technology for endless possibilities has led to the belief that providing information automatically leads to meaningful knowledge creation; hence blurring and confusing the distinction between information and knowledge. This is one of the many misconceptions that emerged with e-learning. There has been a great deal of confusion both in the marketing of and language used in the advocating of the ICTs in teaching and learning. As an example, Guri-Rosenblit (2005, p. 6) identified a list of 15 words used to describe the environment for teaching and learning with technology from various studies: ‘web-based learning, computer-mediated instruction, virtual classrooms, online education, e-learning, e-education, computer-driven interactive communication, open and distance learning, I-Campus, borderless education, cyberspace learning environments, distributed learning, flexible learning, blended learning, mobile-learning’. The list could easily be extended with many more words. Presented with this array of words, most educators are not sure of what e-learning is. Could it be synonymous to distance education? Is it just the use of online tools to enhance or enrich the learning experiences? Is it stashing the whole courseware or parts of it online for students to access? Or is it a new form of collaborative or cooperative learning? Clearly, any of these questions could be used to describe an aspect of e-learning and quite often confuse the uninformed educator. These varied words, with as many definitions, show the degree to which e-learning is being used in different cultures and in different organisations. 
Unfortunately, many of these uses are based on popular assumptions and myths. While the myths that will be discussed in this paper are generic, and hence applicable to e-learning use in most cultures and organisations, the paper’s focus is on higher education, because it forms part of a larger e-learning research project among higher education institutions (HEIs) and also because of the popularity of e-learning use in HEIs. Although there is considerable confusion around the term e-learning, for the purpose of this paper it will be considered as referring to the use of electronic technology and content in teaching and learning. It includes, but is not limited to, the use of the Internet; television; streaming video and video conferencing; online text and multimedia; and mobile technologies. From the nomenclature, also comes the crafting of the language for selling the technologies to the educators. Robertson (2003, p. 280) shows the meticulous choice of words by the marketers where ‘research’ is transformed into a ‘belief system’ and the past tense (used to communicate research findings) is substituted for the present and future tense, for example “Technology ‘can and will’ rather than ‘has and does’ ” in a quote from Apple’s comment: ‘At Apple, we believe the effective integration of technology into classroom instruction can and will result in higher levels of student achievement’. Similar quotes are available in the market and vendors of technology products for teaching and learning. This, however, is not limited to the market; some researchers have used similar quotes: ‘It is now conventional wisdom that those countries which fail to move from the industrial to the Information Society will not be able to compete in the globalised market system made possible by the new technologies’ (Mac Keogh, 2001, p. 223). 
The role of research should be to question the conventional wisdom or common sense and offer plausible answers, rather than dancing to the fine tunes of popular or mass wisdom. It is also interesting to note that Mac Keogh (2001, p. 233) concludes that ‘[w]hen issues other than costs and performance outcomes are considered, the rationale for introducing ICTs in education is more powerful’. Does this mean that irrespective of whether ICTs",
"title": ""
}
] | [
{
"docid": "neg:1840150_0",
"text": "Despite some notable and rare exceptions and after many years of relative neglect (particularly in the ‘upper echelons’ of IS research), there appears to be some renewed interest in Information Systems Ethics (ISE). This paper reflects on the development of ISE by assessing the use and development of ethical theory in contemporary IS research with a specific focus on the ‘leading’ IS journals (according to the Association of Information Systems). The focus of this research is to evaluate if previous calls for more theoretically informed work are permeating the ‘upper echelons’ of IS research and if so, how (Walsham 1996; Smith and Hasnas 1999; Bell and Adam 2004). For the purposes of scope, this paper follows on from those previous studies and presents a detailed review of the leading IS publications between 2005 and 2007 inclusive. After several processes, a total of 32 papers are evaluated. This review highlights that whilst ethical topics are becoming increasingly popular in such influential media, most of the research continues to neglect considerations of ethical theory with preferences for a range of alternative approaches. Finally, this research focuses on some of the papers produced and considers how the use of ethical theory could contribute.",
"title": ""
},
{
"docid": "neg:1840150_1",
"text": "Event recognition systems rely on knowledge bases of event definitions to infer occurrences of events in time. Using a logical framework for representing and reasoning about events offers direct connections to machine learning, via Inductive Logic Programming (ILP), thus allowing to avoid the tedious and error-prone task of manual knowledge construction. However, learning temporal logical formalisms, which are typically utilized by logic-based event recognition systems is a challenging task, which most ILP systems cannot fully undertake. In addition, event-based data is usually massive and collected at different times and under various circumstances. Ideally, systems that learn from temporal data should be able to operate in an incremental mode, that is, revise prior constructed knowledge in the face of new evidence. In this work we present an incremental method for learning and revising event-based knowledge, in the form of Event Calculus programs. The proposed algorithm relies on abductive–inductive learning and comprises a scalable clause refinement methodology, based on a compressive summarization of clause coverage in a stream of examples. We present an empirical evaluation of our approach on real and synthetic data from activity recognition and city transport applications.",
"title": ""
},
{
"docid": "neg:1840150_2",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "neg:1840150_3",
"text": "We describe the large vocabulary automatic speech recognition system developed for Modern Standard Arabic by the SRI/Nightingale team, and used for the 2007 GALE evaluation as part of the speech translation system. We show how system performance is affected by different development choices, ranging from text processing and lexicon to decoding system architecture design. Word error rate results are reported on broadcast news and conversational data from the GALE development and evaluation test sets.",
"title": ""
},
{
"docid": "neg:1840150_4",
"text": "A framework for clustered-dot color halftone watermarking is proposed. Watermark patterns are embedded in the color halftone on per-separation basis. For typical CMYK printing systems, common desktop RGB color scanners are unable to provide the individual colorant halftone separations, which confounds per-separation detection methods. Not only does the K colorant consistently appear in the scanner channels as it absorbs uniformly across the spectrum, but cross-couplings between CMY separations are also observed in the scanner color channels due to unwanted absorptions. We demonstrate that by exploiting spatial frequency and color separability of clustered-dot color halftones, estimates of the individual colorant halftone separations can be obtained from scanned RGB images. These estimates, though not perfect, allow per-separation detection to operate efficiently. The efficacy of this methodology is demonstrated using continuous phase modulation for the embedding of per-separation watermarks.",
"title": ""
},
{
"docid": "neg:1840150_5",
"text": "Touch gestures can be a very important aspect when developing mobile applications with augmented reality. The main purpose of this research was to determine which touch gestures were most frequently used by engineering students when using a simulation of projectile motion in a mobile AR application. A randomized experimental design was given to students, and the results showed the most commonly used gestures to visualize are: zoom in “pinch open”, zoom out “pinch closed”, move “drag” and spin “rotate”.",
"title": ""
},
{
"docid": "neg:1840150_6",
"text": "The current generation of manufacturing systems relies on monolithic control software which provides real-time guarantees but is hard to adapt and reuse. These qualities are becoming increasingly important for meeting the demands of a global economy. Ongoing research and industrial efforts therefore focus on service-oriented architectures (SOA) to increase the control software’s flexibility while reducing development time, effort and cost. With such encapsulated functionality, system behavior can be expressed in terms of operations on data and the flow of data between operators. In this thesis we consider industrial real-time systems from the perspective of distributed data processing systems. Data processing systems often must be highly flexible, which can be achieved by a declarative specification of system behavior. In such systems, a user expresses the properties of an acceptable solution while the system determines a suitable execution plan that meets these requirements. Applied to the real-time control domain, this means that the user defines an abstract workflow model with global timing constraints from which the system derives an execution plan that takes the underlying system environment into account. The generation of a suitable execution plan often is NP-hard and many data processing systems rely on heuristic solutions to quickly generate high quality plans. We utilize heuristics for finding real-time execution plans. Our evaluation shows that heuristics were successful in finding a feasible execution plan in 99% of the examined test cases. Lastly, data processing systems are engineered for an efficient exchange of data and therefore are usually built around a direct data flow between the operators without a mediating entity in between. 
Applied to SOA-based automation, the same principle is realized through service choreographies with direct communication between the individual services instead of employing a service orchestrator which manages the invocation of all services participating in a workflow. These three principles outline the main contributions of this thesis: A flexible reconfiguration of SOA-based manufacturing systems with verifiable real-time guarantees, fast heuristics-based planning, and a peer-to-peer execution model for SOAs with clear semantics. We demonstrate these principles within a demonstrator that is close to a real-world industrial system.",
"title": ""
},
{
"docid": "neg:1840150_7",
"text": "An ultra-wideband transition from microstrip to stripline in PCB technology is presented applying only through via holes for simple fabrication. The design is optimized using full-wave EM simulations. A prototype is manufactured and measured achieving a return loss better than 8.7dB and an insertion loss better than 1.2 dB in the FCC frequency range. A meander-shaped delay line in stripline technique is presented as an example of application.",
"title": ""
},
{
"docid": "neg:1840150_8",
"text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.",
"title": ""
},
{
"docid": "neg:1840150_9",
"text": "Extracting opinion expressions from text is an essential task of sentiment analysis, which is usually treated as one of the word-level sequence labeling problems. In such problems, compositional models with multiplicative gating operations provide efficient ways to encode the contexts, as well as to choose critical information. Thus, in this paper, we adopt Long Short-Term Memory (LSTM) recurrent neural networks to address the task of opinion expression extraction and explore the internal mechanisms of the model. The proposed approach is evaluated on the Multi-Perspective Question Answering (MPQA) opinion corpus. The experimental results demonstrate improvement over previous approaches, including the state-of-the-art method based on simple recurrent neural networks. We also provide a novel micro perspective to analyze the run-time processes and gain new insights into the advantages of LSTM selecting the source of information with its flexible connections and multiplicative gating operations.",
"title": ""
},
{
"docid": "neg:1840150_10",
"text": "Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. Image edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties in an image. Since edge detection is at the forefront of image processing for object detection, it is crucial to have a good understanding of edge detection algorithms. In this paper the comparative analysis of various image edge detection techniques is presented. The software is developed using MATLAB 7.0. It has been shown that Canny's edge detection algorithm performs better than all these operators under almost all scenarios. Evaluation of the images showed that under noisy conditions Canny, LoG (Laplacian of Gaussian), Robert, Prewitt, and Sobel exhibit better performance, respectively. It has been observed that Canny's edge detection algorithm is computationally more expensive compared to the LoG (Laplacian of Gaussian), Sobel, Prewitt, and Robert operators.",
"title": ""
},
{
"docid": "neg:1840150_11",
"text": "In this paper we describe and evaluate some recently innovated coupling metrics for object-oriented (OO) design. The Coupling Between Objects (CBO) metric of Chidamber and Kemerer (C&K) is evaluated empirically using five OO systems and compared with an alternative OO design metric called NAS, which measures the Number of Associations between a class and its peers. The NAS metric is directly collectible from design documents such as the Object Model of OMT. Results from all systems studied indicate a strong relationship between CBO and NAS, suggesting that they are not orthogonal. We hypothesised that coupling would be related to understandability, the number of errors, and error density. No relationships were found for any of the systems between class understandability and coupling. However, we did find partial support for our hypothesis linking increased coupling to increased error density. The work described in this paper is part of the Metrics for OO Programming Systems (MOOPS) project, whose aims are to evaluate existing OO metrics, and to innovate and evaluate new OO analysis and design metrics aimed specifically at the early stages of development.",
"title": ""
},
{
"docid": "neg:1840150_12",
"text": "This paper presents the implementation of an interval type-2 fuzzy system to control the production process of high-strength low-alloy (HSLA) steel in a secondary metallurgy process in a simple way. The proposal evaluates fuzzy techniques to ensure the accuracy of the model; the most important advantage is that the system does not need pretreatment of the historical data, which is used as it is. The system is a multiple input single output (MISO) system, and the main goal of this paper is the proposal of a system that optimizes resources: computational, time, among others.",
"title": ""
},
{
"docid": "neg:1840150_13",
"text": "This paper provides a critical overview of the theoretical, analytical, and practical questions most prevalent in the study of the structural and the sociolinguistic dimensions of code-switching (CS). In doing so, it reviews a range of empirical studies from around the world. The paper first looks at the linguistic research on the structural features of CS focusing in particular on the code-switching versus borrowing distinction, and the syntactic constraints governing its operation. It then critically reviews sociological, anthropological, and linguistic perspectives dominating the sociolinguistic research on CS over the past three decades. Major empirical studies on the discourse functions of CS are discussed, noting the similarities and differences between socially motivated CS and style-shifting. Finally, directions for future research on CS are discussed, giving particular emphasis to the methodological issue of its applicability to the analysis of bilingual classroom interaction.",
"title": ""
},
{
"docid": "neg:1840150_14",
"text": "The overall focus of this research is to demonstrate the savings potential generated by the integration of the design of strategic global supply chain networks with the determination of tactical production–distribution allocations and transfer prices. The logistics systems design problem is defined as follows: given a set of potential suppliers, potential manufacturing facilities, and distribution centers with multiple possible configurations, and customers with deterministic demands, determine the configuration of the production–distribution system and the transfer prices between various subsidiaries of the corporation such that seasonal customer demands and service requirements are met and the after tax profit of the corporation is maximized. The after tax profit is the difference between the sales revenue minus the total system cost and taxes. The total cost is defined as the sum of supply, production, transportation, inventory, and facility costs. Two models and their associated solution algorithms will be introduced. The savings opportunities created by designing the system with a methodology that integrates strategic and tactical decisions rather than in a hierarchical fashion are demonstrated with two case studies. The first model focuses on the setting of transfer prices in a global supply chain with the objective of maximizing the after tax profit of an international corporation. The constraints mandated by the national taxing authorities create a bilinear programming formulation. We will describe a very efficient heuristic iterative solution algorithm, which alternates between the optimization of the transfer prices and the material flows. Performance and bounds for the heuristic algorithms will be discussed. The second model focuses on the production and distribution allocation in a single country system, when the customers have seasonal demands. 
This model also needs to be solved as a subproblem in the heuristic solution of the global transfer price model. The research develops an integrated design methodology based on primal decomposition methods for the mixed integer programming formulation. The primal decomposition allows a natural split of the production and transportation decisions and the research identifies the necessary information flows between the subsystems. The primal decomposition method also allows a very efficient solution algorithm for this general class of large mixed integer programming models. Data requirements and solution times will be discussed for a real life case study in the packaging industry. 2002 Elsevier Science B.V. All rights reserved. European Journal of Operational Research 143 (2002) 1–18.",
"title": ""
},
{
"docid": "neg:1840150_15",
"text": "This paper presents a new method, Minimax Tree Optimization (MMTO), to learn a heuristic evaluation function of a practical alpha-beta search program. The evaluation function may be a linear or non-linear combination of weighted features, and the weights are the parameters to be optimized. To control the search results so that the move decisions agree with the game records of human experts, a well-modeled objective function to be minimized is designed. Moreover, a numerical iterative method is used to find local minima of the objective function, and more than forty million parameters are adjusted by using a small number of hyper parameters. This method was applied to shogi, a major variant of chess in which the evaluation function must handle a larger state space than in chess. Experimental results show that the large-scale optimization of the evaluation function improves the playing strength of shogi programs, and the new method performs significantly better than other methods. Implementation of the new method in our shogi program Bonanza made substantial contributions to the program’s first-place finish in the 2013 World Computer Shogi Championship. Additionally, we present preliminary evidence of broader applicability of our method to other two-player games such as chess.",
"title": ""
},
{
"docid": "neg:1840150_16",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "neg:1840150_17",
"text": "Why is corruption—the misuse of public office for private gain— perceived to be more widespread in some countries than others? Different theories associate this with particular historical and cultural traditions, levels of economic development, political institutions, and government policies. This article analyzes several indexes of “perceived corruption” compiled from business risk surveys for the 1980s and 1990s. Six arguments find support. Countries with Protestant traditions, histories of British rule, more developed economies, and (probably) higher imports were less \"corrupt\". Federal states were more \"corrupt\". While the current degree of democracy was not significant, long exposure to democracy predicted lower corruption.",
"title": ""
},
{
"docid": "neg:1840150_18",
"text": "The paper introduces a complete offline programming toolbox for remote laser welding (RLW) which provides a semi-automated method for computing close-to-optimal robot programs. A workflow is proposed for the complete planning process, and new models and algorithms are presented for solving the optimization problems related to each step of the workflow: the sequencing of the welding tasks, path planning, workpiece placement, calculation of inverse kinematics and the robot trajectory, as well as for generating the robot program code. The paper summarizes the results of an industrial case study on the assembly of a car door using RLW technology, which illustrates the feasibility and the efficiency of the proposed approach.",
"title": ""
},
{
"docid": "neg:1840150_19",
"text": "This study was carried out in a Turkish university with 216 undergraduate students of computer technology as respondents. The study aimed to develop a scale (UECUBS) to determine unethical computer use behavior. A factor analysis of the related items revealed that the factors can be grouped under five headings: intellectual property, social impact, safety and quality, net integrity, and information integrity. 2005 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
1840151 | Combining monoSLAM with object recognition for scene augmentation using a wearable camera | [
{
"docid": "pos:1840151_0",
"text": "This paper presents a system that allows online building of 3D wireframe models through a combination of user interaction and automated methods from a handheld camera-mouse. Crucially, the model being built is used to concurrently compute camera pose, permitting extendable tracking while enabling the user to edit the model interactively. In contrast to other model building methods that are either off-line and/or automated but computationally intensive, the aim here is to have a system that has low computational requirements and that enables the user to define what is relevant (and what is not) at the time the model is being built. OutlinAR hardware is also developed which simply consists of the combination of a camera with a wide field of view lens and a wheeled computer mouse.",
"title": ""
}
] | [
{
"docid": "neg:1840151_0",
"text": "This paper deals with the kinematic and dynamic analyses of the Orthoglide 5-axis, a five-degree-of-freedom manipulator. It is derived from two manipulators: i) the Orthoglide 3-axis; a three dof translational manipulator and ii) the Agile eye; a parallel spherical wrist. First, the kinematic and dynamic models of the Orthoglide 5-axis are developed. The geometric and inertial parameters of the manipulator are determined by means of a CAD software. Then, the required motors performances are evaluated for some test trajectories. Finally, the motors are selected in the catalogue from the previous results.",
"title": ""
},
{
"docid": "neg:1840151_1",
"text": "Tracking multiple objects is a challenging task when objects move in groups and occlude each other. Existing methods have investigated the problems of group division and group energy-minimization; however, the lack of overall object-group topology modeling limits their ability to handle complex object and group dynamics. Inspired by the social affinity property of moving objects, we propose a Graphical Social Topology (GST) model, which estimates group dynamics by jointly modeling the group structure and the states of objects using a topological representation. With such a topology representation, moving objects are not only assigned to groups, but also dynamically connected with each other, which enables in-group individuals to be correctly associated and the cohesion of each group to be precisely modeled. Using well-designed topology learning modules and topology training, we infer the birth/death and merging/splitting of dynamic groups. With the GST model, the proposed multi-object tracker can naturally handle the occlusion problem by treating the occluded object and other in-group members as a whole while leveraging overall state transition. Experiments on both RGB and RGB-D datasets confirm that the proposed multi-object tracker improves on the state of the art, especially in crowded scenes.",
"title": ""
},
{
"docid": "neg:1840151_2",
"text": "A single color image can contain many cues informative towards different aspects of local geometric structure. We approach the problem of monocular depth estimation by using a neural network to produce a mid-level representation that summarizes these cues. This network is trained to characterize local scene geometry by predicting, at every image location, depth derivatives of different orders, orientations and scales. However, instead of a single estimate for each derivative, the network outputs probability distributions that allow it to express confidence about some coefficients, and ambiguity about others. Scene depth is then estimated by harmonizing this overcomplete set of network predictions, using a globalization procedure that finds a single consistent depth map that best matches all the local derivative distributions. We demonstrate the efficacy of this approach through evaluation on the NYU v2 depth data set.",
"title": ""
},
{
"docid": "neg:1840151_3",
"text": "Morphological segmentation of words is a subproblem of many natural language tasks, including handling out-of-vocabulary (OOV) words in machine translation, more effective information retrieval, and computer assisted vocabulary learning. Previous work typically relies on extensive statistical and semantic analyses to induce legitimate stems and affixes. We introduce a new learning based method and a prototype implementation of a knowledge light system for learning to segment a given word into word parts, including prefixes, suffixes, stems, and even roots. The method is based on the Conditional Random Fields (CRF) model. Evaluation results show that our method with a small set of seed training data and readily available resources can produce fine-grained morphological segmentation results that rival previous work and systems.",
"title": ""
},
{
"docid": "neg:1840151_4",
"text": "In this paper, we propose a Switchable Deep Network (SDN) for pedestrian detection. The SDN automatically learns hierarchical features, salience maps, and mixture representations of different body parts. Pedestrian detection faces the challenges of background clutter and large variations of pedestrian appearance due to pose and viewpoint changes and other factors. One of our key contributions is to propose a Switchable Restricted Boltzmann Machine (SRBM) to explicitly model the complex mixture of visual variations at multiple levels. At the feature levels, it automatically estimates saliency maps for each test sample in order to separate background clutters from discriminative regions for pedestrian detection. At the part and body levels, it is able to infer the most appropriate template for the mixture models of each part and the whole body. We have devised a new generative algorithm to effectively pretrain the SDN and then fine-tune it with back-propagation. Our approach is evaluated on the Caltech and ETH datasets and achieves the state-of-the-art detection performance.",
"title": ""
},
{
"docid": "neg:1840151_5",
"text": "Uploading data streams to a resource-rich cloud server for inner product evaluation, an essential building block in many popular stream applications (e.g., statistical monitoring), is appealing to many companies and individuals. On the other hand, verifying the result of the remote computation plays a crucial role in addressing the issue of trust. Since the outsourced data collection likely comes from multiple data sources, it is desired for the system to be able to pinpoint the originator of errors by allotting each data source a unique secret key, which requires the inner product verification to be performed under any two parties’ different keys. However, the present solutions either depend on a single key assumption or powerful yet practically-inefficient fully homomorphic cryptosystems. In this paper, we focus on the more challenging multi-key scenario where data streams are uploaded by multiple data sources with distinct keys. We first present a novel homomorphic verifiable tag technique to publicly verify the outsourced inner product computation on the dynamic data streams, and then extend it to support the verification of matrix product computation. We prove the security of our scheme in the random oracle model. Moreover, the experimental result also shows the practicability of our design.",
"title": ""
},
{
"docid": "neg:1840151_6",
"text": "The application of neuroscience to marketing, and in particular to the consumer psychology of brands, has gained popularity over the past decade in the academic and the corporate world. In this paper, we provide an overview of the current and previous research in this area and explainwhy researchers and practitioners alike are excited about applying neuroscience to the consumer psychology of brands. We identify critical issues of past research and discuss how to address these issues in future research. We conclude with our vision of the future potential of research at the intersection of neuroscience and consumer psychology. © 2011 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840151_7",
"text": "A general principle of sensory processing is that neurons adapt to sustained stimuli by reducing their response over time. Most of our knowledge on adaptation in single cells is based on experiments in anesthetized animals. How responses adapt in awake animals, when stimuli may be behaviorally relevant or not, remains unclear. Here we show that contrast adaptation in mouse primary visual cortex depends on the behavioral relevance of the stimulus. Cells that adapted to contrast under anesthesia maintained or even increased their activity in awake naïve mice. When engaged in a visually guided task, contrast adaptation re-occurred for stimuli that were irrelevant for solving the task. However, contrast adaptation was reversed when stimuli acquired behavioral relevance. Regulation of cortical adaptation by task demand may allow dynamic control of sensory-evoked signal flow in the neocortex.",
"title": ""
},
{
"docid": "neg:1840151_8",
"text": "We propose that a robot speak Hanamogera (semantic-free speech) when it talks with a person. Hanamogera is semantic-free speech: the spoken sound is composed of words built from phonogram characters, and the characters can be changed freely because Hanamogera speech does not have to carry any meaning. Each character sound in Hanamogera is thought to convey an impression according to the consonant/vowel it contains. Hanamogera is expected to make a listener feel that talking with a robot which speaks it is fun because of the sound of the speech. We conducted an experiment of talking with a NAO robot and an experiment evaluating Hanamogera speeches. The results showed that talking with a Hanamogera-speaking robot was more fun than talking with a nodding robot.",
"title": ""
},
{
"docid": "neg:1840151_9",
"text": "Most research that explores the emotional state of users of spoken dialog systems does not fully utilize the contextual nature that the dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the “How May I Help You” spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.",
"title": ""
},
{
"docid": "neg:1840151_10",
"text": "Dr. Stephanie L. Cincotta (Psychiatry): A 35-year-old woman was seen in the emergency department of this hospital because of a pruritic rash. The patient had a history of hepatitis C virus (HCV) infection, acne, depression, and drug dependency. She had been in her usual health until 2 weeks before this presentation, when insomnia developed, which she attributed to her loss of a prescription for zolpidem. During the 10 days before this presentation, she reported seeing white “granular balls,” which she thought were mites or larvae, emerging from and crawling on her skin, sheets, and clothing and in her feces, apartment, and car, as well as having an associated pruritic rash. She was seen by her physician, who referred her to a dermatologist for consideration of other possible causes of the persistent rash, such as porphyria cutanea tarda, which is associated with HCV infection. Three days before this presentation, the patient ran out of clonazepam (after an undefined period during which she reportedly took more than the prescribed dose) and had increasing anxiety and insomnia. The same day, she reported seeing “bugs” on her 15-month-old son that were emerging from his scalp and were present on his skin and in his diaper and sputum. The patient scratched her skin and her child’s skin to remove the offending agents. The day before this presentation, she called emergency medical services and she and her child were transported by ambulance to the emergency department of another hospital. A diagnosis of possible cheyletiellosis was made. She was advised to use selenium sulfide shampoo and to follow up with her physician; the patient returned home with her child. On the morning of admission, while bathing her child, she noted that his scalp was turning red and he was crying. She came with her son to the emergency department of this hospital. The patient reported the presence of bugs on her skin, which she attempted to point out to examiners. 
She acknowledged a habit of picking at her skin since adolescence, which she said had a calming effect. Fourteen months earlier, shortly after the birth of her son, worsening acne developed that did not respond to treatment with topical antimicrobial agents and tretinoin. Four months later, a facial abscess due From the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Massachusetts General Hospital, and the Departments of Psychiatry (S.R.B., N.K.) and Dermatology (D.K.), Harvard Medi‐ cal School — both in Boston.",
"title": ""
},
{
"docid": "neg:1840151_11",
"text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.",
"title": ""
},
{
"docid": "neg:1840151_12",
"text": "In the past half-decade, Amazon Mechanical Turk has radically changed the way many scholars do research. The availability of a massive, distributed, anonymous crowd of individuals willing to perform general human-intelligence micro-tasks for micro-payments is a valuable resource for researchers and practitioners. This paper addresses the challenges of obtaining quality annotations for subjective judgment oriented tasks of varying difficulty. We design and conduct a large, controlled experiment (N=68,000) to measure the efficacy of selected strategies for obtaining high quality data annotations from non-experts. Our results point to the advantages of person-oriented strategies over process-oriented strategies. Specifically, we find that screening workers for requisite cognitive aptitudes and providing training in qualitative coding techniques is quite effective, significantly outperforming control and baseline conditions. Interestingly, such strategies can improve coder annotation accuracy above and beyond common benchmark strategies such as Bayesian Truth Serum (BTS).",
"title": ""
},
{
"docid": "neg:1840151_13",
"text": "A millimeter-wave CMOS on-chip stacked Marchand balun is presented in this paper. The balun is fabricated using a top pad metal layer as the single-ended port and is stacked above two metal conductors at the next highest metal layer in order to achieve sufficient coupling to function as the differential ports. Strip metal shields are placed underneath the structure to reduce substrate losses. An amplitude imbalance of 0.5 dB is measured with attenuations below 6.5 dB at the differential output ports at 30 GHz. The corresponding phase imbalance is below 5 degrees. The area occupied is 229μm × 229μm.",
"title": ""
},
{
"docid": "neg:1840151_14",
"text": "A number of compilers exploit the following strategy: translate a term to continuation-passing style (CPS) and optimize the resulting term using a sequence of reductions. Recent work suggests that an alternative strategy is superior: optimize directly in an extended source calculus. We suggest that the appropriate relation between the source and target calculi may be captured by a special case of a Galois connection known as a reflection. Previous work has focused on the weaker notion of an equational correspondence, which is based on equality rather than reduction. We show that Moggi's monad translation and Plotkin's CPS translation can both be regarded as reflections, and thereby strengthen a number of results in the literature.",
"title": ""
},
{
"docid": "neg:1840151_15",
"text": "Sex estimation is considered as one of the essential parameters in forensic anthropology casework, and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods are still very imperative in identification process in spite of the advent and accomplishment of molecular techniques. A constant boost in the use of imaging techniques in forensic anthropology research has facilitated to derive as well as revise the available population data. These methods however, are less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches; morphological, metric, molecular and radiographic methods in sex estimation of skeletal remains. Numerous studies have shown a higher reliability and reproducibility of measurements taken directly on the bones and hence, such direct methods of sex estimation are considered to be more reliable than the other methods. Geometric morphometric (GM) method and Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid methods and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods are shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. Development of newer and better methodologies for sex estimation as well as re-evaluation of the existing ones will continue in the endeavour of forensic researchers for more accurate results.",
"title": ""
},
{
"docid": "neg:1840151_16",
"text": "This research is a partial test of Park et al.’s (2008) model to assess the impact of flow and brand equity in 3D virtual worlds. It draws on flow theory as its main theoretical foundation to understand and empirically assess the impact of flow on brand equity and behavioral intention in 3D virtual worlds. The findings suggest that the balance of skills and challenges in 3D virtual worlds influences users’ flow experience, which in turn influences brand equity. Brand equity then increases behavioral intention. The authors also found that the impact of flow on behavioral intention in 3D virtual worlds is indirect because the relationship between them is mediated by brand equity. This research highlights the importance of balancing the challenges posed by 3D virtual world branding sites with the users’ skills to maximize their flow experience and brand equity to increase the behavioral intention associated with the brand.",
"title": ""
},
{
"docid": "neg:1840151_17",
"text": "This paper describes a computer vision based system for real-time robust traffic sign detection, tracking, and recognition. Such a framework is of major interest for driver assistance in an intelligent automotive cockpit environment. The proposed approach consists of two components. First, signs are detected using a set of Haar wavelet features obtained from AdaBoost training. Compared to previously published approaches, our solution offers a generic, joint modeling of color and shape information without the need of tuning free parameters. Once detected, objects are efficiently tracked within a temporal information propagation framework. Second, classification is performed using Bayesian generative modeling. Making use of the tracking information, hypotheses are fused over multiple frames. Experiments show high detection and recognition accuracy and a frame rate of approximately 10 frames per second on a standard PC.",
"title": ""
},
{
"docid": "neg:1840151_18",
"text": "This paper presents a novel representation for three-dimensional objects in terms of affine-invariant image patches and their spatial relationships. Multi-view co nstraints associated with groups of patches are combined wit h a normalized representation of their appearance to guide matching and reconstruction, allowing the acquisition of true three-dimensional affine and Euclidean models from multiple images and their recognition in a single photograp h taken from an arbitrary viewpoint. The proposed approach does not require a separate segmentation stage and is applicable to cluttered scenes. Preliminary modeling and recognition results are presented.",
"title": ""
},
{
"docid": "neg:1840151_19",
"text": "The named concepts and compositional operators present in natural language provide a rich source of information about the kinds of abstractions humans use to navigate the world. Can this linguistic background knowledge improve the generality and efficiency of learned classifiers and control policies? This paper aims to show that using the space of natural language strings as a parameter space is an effective way to capture natural task structure. In a pretraining phase, we learn a language interpretation model that transforms inputs (e.g. images) into outputs (e.g. labels) given natural language descriptions. To learn a new concept (e.g. a classifier), we search directly in the space of descriptions to minimize the interpreter’s loss on training examples. Crucially, our models do not require language data to learn these concepts: language is used only in pretraining to impose structure on subsequent learning. Results on image classification, text editing, and reinforcement learning show that, in all settings, models with a linguistic parameterization outperform those without.1",
"title": ""
}
] |
1840152 | HAGP: A Hub-Centric Asynchronous Graph Processing Framework for Scale-Free Graph | [
{
"docid": "pos:1840152_0",
"text": "At extreme scale, irregularities in the structure of scale-free graphs such as social network graphs limit our ability to analyze these important and growing datasets. A key challenge is the presence of high-degree vertices (hubs), that leads to parallel workload and storage imbalances. The imbalances occur because existing partitioning techniques are not able to effectively partition high-degree vertices.\n We present techniques to distribute storage, computation, and communication of hubs for extreme scale graphs in distributed memory supercomputers. To balance the hub processing workload, we distribute hub data structures and related computation among a set of delegates. The delegates coordinate using highly optimized, yet portable, asynchronous broadcast and reduction operations. We demonstrate scalability of our new algorithmic technique using Breadth-First Search (BFS), Single Source Shortest Path (SSSP), K-Core Decomposition, and PageRank on synthetically generated scale-free graphs. Our results show excellent scalability on large scale-free graphs up to 131K cores of the IBM BG/P, and outperform the best known Graph500 performance on BG/P Intrepid by 15%.",
"title": ""
},
{
"docid": "pos:1840152_1",
"text": "Myriad of graph-based algorithms in machine learning and data mining require parsing relational data iteratively. These algorithms are implemented in a large-scale distributed environment to scale to massive data sets. To accelerate these large-scale graph-based iterative computations, we propose delta-based accumulative iterative computation (DAIC). Different from traditional iterative computations, which iteratively update the result based on the result from the previous iteration, DAIC updates the result by accumulating the “changes” between iterations. By DAIC, we can process only the “changes” to avoid the negligible updates. Furthermore, we can perform DAIC asynchronously to bypass the high-cost synchronous barriers in heterogeneous distributed environments. Based on the DAIC model, we design and implement an asynchronous graph processing framework, Maiter. We evaluate Maiter on local cluster as well as on Amazon EC2 Cloud. The results show that Maiter achieves as much as 60 × speedup over Hadoop and outperforms other state-of-the-art frameworks.",
"title": ""
}
] | [
{
"docid": "neg:1840152_0",
"text": "We describe a learning-based method for recovering 3D human body pose from single images and monocular image sequences. Our approach requires neither an explicit body model nor prior labeling of body parts in the image. Instead, it recovers pose by direct nonlinear regression against shape descriptor vectors extracted automatically from image silhouettes. For robustness against local silhouette segmentation errors, silhouette shape is encoded by histogram-of-shape-contexts descriptors. We evaluate several different regression methods: ridge regression, relevance vector machine (RVM) regression, and support vector machine (SVM) regression over both linear and kernel bases. The RVMs provide much sparser regressors without compromising performance, and kernel bases give a small but worthwhile improvement in performance. The loss of depth and limb labeling information often makes the recovery of 3D pose from single silhouettes ambiguous. To handle this, the method is embedded in a novel regressive tracking framework, using dynamics from the previous state estimate together with a learned regression value to disambiguate the pose. We show that the resulting system tracks long sequences stably. For realism and good generalization over a wide range of viewpoints, we train the regressors on images resynthesized from real human motion capture data. The method is demonstrated for several representations of full body pose, both quantitatively on independent but similar test data and qualitatively on real image sequences. Mean angular errors of 4-6/spl deg/ are obtained for a variety of walking motions.",
"title": ""
},
{
"docid": "neg:1840152_1",
"text": "Detection of image forgery is an important part of digital forensics and has attracted a lot of attention in the past few years. Previous research has examined residual pattern noise, wavelet transform and statistics, image pixel value histogram and other features of images to authenticate the primordial nature. With the development of neural network technologies, some effort has recently applied convolutional neural networks to detecting image forgery to achieve high-level image representation. This paper proposes to build a convolutional neural network different from the related work in which we try to understand extracted features from each convolutional layer and detect different types of image tampering through automatic feature learning. The proposed network involves five convolutional layers, two full-connected layers and a Softmax classifier. Our experiment has utilized CASIA v1.0, a public image set that contains authentic images and splicing images, and its further reformed versions containing retouching images and re-compressing images as the training data. Experimental results can clearly demonstrate the effectiveness and adaptability of the proposed network.",
"title": ""
},
{
"docid": "neg:1840152_2",
"text": "Cloud radio access network (C-RAN) refers to the visualization of base station functionalities by means of cloud computing. This results in a novel cellular architecture in which low-cost wireless access points, known as radio units or remote radio heads, are centrally managed by a reconfigurable centralized \"cloud\", or central, unit. C-RAN allows operators to reduce the capital and operating expenses needed to deploy and maintain dense heterogeneous networks. This critical advantage, along with spectral efficiency, statistical multiplexing and load balancing gains, make C-RAN well positioned to be one of the key technologies in the development of 5G systems. In this paper, a succinct overview is presented regarding the state of the art on the research on C-RAN with emphasis on fronthaul compression, baseband processing, medium access control, resource allocation, system-level considerations and standardization efforts.",
"title": ""
},
{
"docid": "neg:1840152_3",
"text": "Moringa oleifera Lam. (family; Moringaceae), commonly known as drumstick, have been used for centuries as a part of the Ayurvedic system for several diseases without having any scientific data. Demineralized water was used to prepare aqueous extract by maceration for 24 h and complete metabolic profiling was performed using GC-MS and HPLC. Hypoglycemic properties of extract have been tested on carbohydrate digesting enzyme activity, yeast cell uptake, muscle glucose uptake, and intestinal glucose absorption. Type 2 diabetes was induced by feeding high-fat diet (HFD) for 8 weeks and a single injection of streptozotocin (STZ, 45 mg/kg body weight, intraperitoneally) was used for the induction of type 1 diabetes. Aqueous extract of M. oleifera leaf was given orally at a dose of 100 mg/kg to STZ-induced rats and 200 mg/kg in HFD mice for 3 weeks after diabetes induction. Aqueous extract remarkably inhibited the activity of α-amylase and α-glucosidase and it displayed improved antioxidant capacity, glucose tolerance and rate of glucose uptake in yeast cell. In STZ-induced diabetic rats, it produces a maximum fall up to 47.86% in acute effect whereas, in chronic effect, it was 44.5% as compared to control. The fasting blood glucose, lipid profile, liver marker enzyme level were significantly (p < 0.05) restored in both HFD and STZ experimental model. Multivariate principal component analysis on polar and lipophilic metabolites revealed clear distinctions in the metabolite pattern in extract and in blood after its oral administration. Thus, the aqueous extract can be used as phytopharmaceuticals for the management of diabetes by using as adjuvants or alone.",
"title": ""
},
{
"docid": "neg:1840152_4",
"text": "I use the example of the 2000 US Presidential election to show that political controversies with technical underpinnings are not resolved by technical means. Then, drawing from examples such as climate change, genetically modified foods, and nuclear waste disposal, I explore the idea that scientific inquiry is inherently and unavoidably subject to becoming politicized in environmental controversies. I discuss three reasons for this. First, science supplies contesting parties with their own bodies of relevant, legitimated facts about nature, chosen in part because they help make sense of, and are made sensible by, particular interests and normative frameworks. Second, competing disciplinary approaches to understanding the scientific bases of an environmental controversy may be causally tied to competing value-based political or ethical positions. The necessity of looking at nature through a variety of disciplinary lenses brings with it a variety of normative lenses, as well. Third, it follows from the foregoing that scientific uncertainty, which so often occupies a central place in environmental controversies, can be understood not as a lack of scientific understanding but as the lack of coherence among competing scientific understandings, amplified by the various political, cultural, and institutional contexts within which science is carried out. In light of these observations, I briefly explore the problem of why some types of political controversies become “scientized” and others do not, and conclude that the value bases of disputes underlying environmental controversies must be fully articulated and adjudicated through political means before science can play an effective role in resolving environmental problems. © 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840152_5",
"text": "Several studies have used the Edinburgh Postnatal Depression Scale (EPDS), developed to screen new mothers, also for new fathers. This study aimed to further contribute to this knowledge by comparing assessment of possible depression in fathers and associated demographic factors by the EPDS and the Gotland Male Depression Scale (GMDS), developed for \"male\" depression screening. The study compared EPDS score ≥10 and ≥12, corresponding to minor and major depression, respectively, in relation to GMDS score ≥13. At 3-6 months after child birth, a questionnaire was sent to 8,011 fathers of whom 3,656 (46%) responded. The detection of possibly depressed fathers by EPDS was 8.1% at score ≥12, comparable to the 8.6% detected by the GMDS. At score ≥10, the proportion detected by EPDS increased to 13.3%. Associations with possible risk factors were analyzed for fathers detected by one or both scales. A low income was associated with depression in all groups. Fathers detected by EPDS alone were at higher risk if they had three or more children, or lower education. Fathers detected by EPDS alone at score ≥10, or by both scales at EPDS score ≥12, more often were born in a foreign country. Seemingly, the EPDS and the GMDS are associated with different demographic risk factors. The EPDS score appears critical since 5% of possibly depressed fathers are excluded at EPDS cutoff 12. These results suggest that neither scale alone is sufficient for depression screening in new fathers, and that the decision of EPDS cutoff is crucial.",
"title": ""
},
{
"docid": "neg:1840152_6",
"text": "In this paper, we propose a new algorithm to compute intrinsic means of organ shapes from 3D medical images. More specifically, we explore the feasibility of Karcher means in the framework of the large deformations by diffeomorphisms (LDDMM). This setting preserves the topology of the averaged shapes and has interesting properties to quantitatively describe their anatomical variability. Estimating Karcher means requires to perform multiple registrations between the averaged template image and the set of reference 3D images. Here, we use a recent algorithm based on an optimal control method to satisfy the geodesicity of the deformations at any step of each registration. We also combine this algorithm with organ specific metrics. We demonstrate the efficiency of our methodology with experimental results on different groups of anatomical 3D images. We also extensively discuss the convergence of our method and the bias due to the initial guess. A direct perspective of this work is the computation of 3D+time atlases.",
"title": ""
},
{
"docid": "neg:1840152_7",
"text": "Robots are being deployed in an increasing variety of environments for longer periods of time. As the number of robots grows, they will increasingly need to interact with other robots. Additionally, the number of companies and research laboratories producing these robots is increasing, leading to the situation where these robots may not share a common communication or coordination protocol. While standards for coordination and communication may be created, we expect that robots will need to additionally reason intelligently about their teammates with limited information. This problem motivates the area of ad hoc teamwork in which an agent may potentially cooperate with a variety of teammates in order to achieve a shared goal. This article focuses on a limited version of the ad hoc teamwork problem in which an agent knows the environmental dynamics and has had past experiences with other teammates, though these experiences may not be representative of the current teammates. To tackle this problem, this article introduces a new general-purpose algorithm, PLASTIC, that reuses knowledge learned from previous teammates or provided by experts to quickly adapt to new teammates. This algorithm is instantiated in two forms: 1) PLASTIC–Model – which builds models of previous teammates’ behaviors and plans behaviors online using these models and 2) PLASTIC–Policy – which learns policies for cooperating with previous teammates and selects among these policies online. We evaluate PLASTIC on two benchmark tasks: the pursuit domain and robot soccer in the RoboCup 2D simulation domain. Recognizing that a key requirement of ad hoc teamwork is adaptability to previously unseen agents, the tests use more than 40 previously unknown teams on the first task and 7 previously unknown teams on the second. 
While PLASTIC assumes that there is some degree of similarity between the current and past teammates’ behaviors, no steps are taken in the experimental setup to make sure this assumption holds. The teammates ✩This article contains material from 4 prior conference papers [11–14]. Email addresses: sam@cogitai.com (Samuel Barrett), rosenfa@jct.ac.il (Avi Rosenfeld), sarit@cs.biu.ac.il (Sarit Kraus), pstone@cs.utexas.edu (Peter Stone) 1This work was performed while Samuel Barrett was a graduate student at the University of Texas at Austin. 2Corresponding author. Preprint submitted to Elsevier October 30, 2016 To appear in http://dx.doi.org/10.1016/j.artint.2016.10.005 Artificial Intelligence (AIJ)",
"title": ""
},
{
"docid": "neg:1840152_8",
"text": "As a fundamental preprocessing of various multimedia applications, object proposal aims to detect the candidate windows possibly containing arbitrary objects in images with two typical strategies, window scoring and grouping. In this paper, we first analyze the feasibility of improving object proposal performance by integrating window scoring and grouping strategies. Then, we propose a novel object proposal method for RGB-D images, named elastic edge boxes. The initial bounding boxes of candidate object regions are efficiently generated by edge boxes, and further adjusted by grouping the super-pixels within elastic range to obtain more accurate candidate windows. To validate the proposed method, we construct the largest RGB-D image data set NJU1800 for object proposal with balanced object number distribution. The experimental results show that our method can effectively and efficiently generate the candidate windows of object regions and it outperforms the state-of-the-art methods considering both accuracy and efficiency.",
"title": ""
},
{
"docid": "neg:1840152_9",
"text": "Many studies suggest using coverage concepts, such as branch coverage, as the starting point of testing, while others as the most prominent test quality indicator. Yet the relationship between coverage and fault-revelation remains unknown, yielding uncertainty and controversy. Most previous studies rely on the Clean Program Assumption, that a test suite will obtain similar coverage for both faulty and fixed ('clean') program versions. This assumption may appear intuitive, especially for bugs that denote small semantic deviations. However, we present evidence that the Clean Program Assumption does not always hold, thereby raising a critical threat to the validity of previous results. We then conducted a study using a robust experimental methodology that avoids this threat to validity, from which our primary finding is that strong mutation testing has the highest fault revelation of four widely-used criteria. Our findings also revealed that fault revelation starts to increase significantly only once relatively high levels of coverage are attained.",
"title": ""
},
{
"docid": "neg:1840152_10",
"text": "Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models combining the strengths of both models. Our contribution is to train such hybrid models using an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. In contrast, prior work on such hybrid models needed to limit the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our approach results in models with meaningful latent variable representations, and which rely on powerful autoregressive decoders to model image details. Our model generates qualitatively convincing samples, and yields stateof-the-art quantitative results.",
"title": ""
},
{
"docid": "neg:1840152_11",
"text": "BACKGROUND\nMotivation in learning behaviour and education is well-researched in general education, but less in medical education.\n\n\nAIM\nTo answer two research questions, 'How has the literature studied motivation as either an independent or dependent variable? How is motivation useful in predicting and understanding processes and outcomes in medical education?' in the light of the Self-determination Theory (SDT) of motivation.\n\n\nMETHODS\nA literature search performed using the PubMed, PsycINFO and ERIC databases resulted in 460 articles. The inclusion criteria were empirical research, specific measurement of motivation and qualitative research studies which had well-designed methodology. Only studies related to medical students/school were included.\n\n\nRESULTS\nFindings of 56 articles were included in the review. Motivation as an independent variable appears to affect learning and study behaviour, academic performance, choice of medicine and specialty within medicine and intention to continue medical study. Motivation as a dependent variable appears to be affected by age, gender, ethnicity, socioeconomic status, personality, year of medical curriculum and teacher and peer support, all of which cannot be manipulated by medical educators. Motivation is also affected by factors that can be influenced, among which are, autonomy, competence and relatedness, which have been described as the basic psychological needs important for intrinsic motivation according to SDT.\n\n\nCONCLUSION\nMotivation is an independent variable in medical education influencing important outcomes and is also a dependent variable influenced by autonomy, competence and relatedness. This review finds some evidence in support of the validity of SDT in medical education.",
"title": ""
},
{
"docid": "neg:1840152_12",
"text": "We propose MRU (Multi-Range Reasoning Units), a new fast compositional encoder for machine comprehension (MC). Our proposed MRU encoders are characterized by multi-ranged gating, executing a series of parameterized contractand-expand layers for learning gating vectors that benefit from long and short-term dependencies. The aims of our approach are as follows: (1) learning representations that are concurrently aware of long and short-term context, (2) modeling relationships between intra-document blocks and (3) fast and efficient sequence encoding. We show that our proposed encoder demonstrates promising results both as a standalone encoder and as well as a complementary building block. We conduct extensive experiments on three challenging MC datasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive performance on all. On the RACE benchmark, our model outperforms DFN (Dynamic Fusion Networks) by 1.5% − 6% without using any recurrent or convolution layers. Similarly, we achieve competitive performance relative to AMANDA [17] on the SearchQA benchmark and BiDAF [23] on the NarrativeQA benchmark without using any LSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM architectures further improves performance, achieving state-of-the-art results.",
"title": ""
},
{
"docid": "neg:1840152_13",
"text": "Intelligent vehicles have increased their capabilities for highly and, even fully, automated driving under controlled environments. Scene information is received using onboard sensors and communication network systems, i.e., infrastructure and other vehicles. Considering the available information, different motion planning and control techniques have been implemented to autonomously driving on complex environments. The main goal is focused on executing strategies to improve safety, comfort, and energy optimization. However, research challenges such as navigation in urban dynamic environments with obstacle avoidance capabilities, i.e., vulnerable road users (VRU) and vehicles, and cooperative maneuvers among automated and semi-automated vehicles still need further efforts for a real environment implementation. This paper presents a review of motion planning techniques implemented in the intelligent vehicles literature. A description of the technique used by research teams, their contributions in motion planning, and a comparison among these techniques is also presented. Relevant works in the overtaking and obstacle avoidance maneuvers are presented, allowing the understanding of the gaps and challenges to be addressed in the next years. Finally, an overview of future research direction and applications is given.",
"title": ""
},
{
"docid": "neg:1840152_14",
"text": "The goal of leading indicators for safety is to identify the potential for an accident before it occurs. Past efforts have focused on identifying general leading indicators, such as maintenance backlog, that apply widely in an industry or even across industries. Other recommendations produce more system-specific leading indicators, but start from system hazard analysis and thus are limited by the causes considered by the traditional hazard analysis techniques. Most rely on quantitative metrics, often based on probabilistic risk assessments. This paper describes a new and different approach to identifying system-specific leading indicators and provides guidance in designing a risk management structure to generate, monitor and use the results. The approach is based on the STAMP (SystemTheoretic Accident Model and Processes) model of accident causation and tools that have been designed to build on that model. STAMP extends current accident causality to include more complex causes than simply component failures and chains of failure events or deviations from operational expectations. It incorporates basic principles of systems thinking and is based on systems theory rather than traditional reliability theory.",
"title": ""
},
{
"docid": "neg:1840152_15",
"text": "Financial time sequence analysis has been a popular research topic in the fields of finance, data science and machine learning. It is highly challenging due to the extreme complexity within the sequences. Most existing models fail to capture their intrinsic information, factors and tendencies. To improve on previous approaches, in this paper we propose a Hidden Markov Model (HMM) based approach to analyze financial time sequences. The fluctuation of a financial time sequence was predicted by introducing a dual-state HMM. The dual-state HMM models the sequence and produces the features which will be delivered to SVMs for prediction. Note that we cast a financial time sequence prediction problem as a classification problem. To evaluate the proposed approach, we use the Shanghai Composite Index as the dataset for empirical experiments. The dataset was collected from 550 consecutive trading days, and is randomly split into a training set and a test set. The extensive experimental results show that, when analyzing financial time sequences, the mean-square error calculated with HMMs was clearly smaller than that of the compared GARCH approach. Therefore, when using an HMM to predict the fluctuation of a financial time sequence, it achieves higher accuracy and exhibits several attractive advantages over the GARCH approach.",
"title": ""
},
{
"docid": "neg:1840152_16",
"text": "Component-based software engineering has had great impact in the desktop and server domain and is spreading to other domains as well, such as embedded systems. Agile software development is another approach which has gained much attention in recent years, mainly for smaller-scale production of less critical systems. Both of them promise to increase system quality, development speed and flexibility, but so far little has been published on the combination of the two approaches. This paper presents a comprehensive analysis of the applicability of the agile approach in the development processes of 1) COTS components and 2) COTS-based systems. The study method is a systematic theoretical examination and comparison of the fundamental concepts and characteristics of these approaches. The contributions are: first, an enumeration of identified contradictions between the approaches, and suggestions how to bridge these incompatibilities to some extent. Second, the paper provides some more general comments, considerations, and application guidelines concerning the introduction of agile principles into the development of COTS components or COTS-based systems. This study thus forms a framework which will guide further empirical studies.",
"title": ""
},
{
"docid": "neg:1840152_17",
"text": "Upper-extremity venous thrombosis often presents as unilateral arm swelling. The differential diagnosis includes lesions compressing the veins and causing a functional venous obstruction, venous stenosis, an infection causing edema, obstruction of previously functioning lymphatics, or the absence of sufficient lymphatic channels to ensure effective drainage. The following recommendations are made with the understanding that venous disease, specifically venous thrombosis, is the primary diagnosis to be excluded or confirmed in a patient presenting with unilateral upper-extremity swelling. Contrast venography remains the best reference-standard diagnostic test for suspected upper-extremity acute venous thrombosis and may be needed whenever other noninvasive strategies fail to adequately image the upper-extremity veins. Duplex, color flow, and compression ultrasound have also established a clear role in evaluation of the more peripheral veins that are accessible to sonography. Gadolinium contrast-enhanced MRI is routinely used to evaluate the status of the central veins. Delayed CT venography can often be used to confirm or exclude more central vein venous thrombi, although substantial contrast loads are required. The ACR Appropriateness Criteria(®) are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.",
"title": ""
},
{
"docid": "neg:1840152_18",
"text": "The spiral antenna is a well known kind of wideband antenna. The challenges to improve its design are numerous, such as creating a compact wideband matched feeding or controlling the radiation pattern. Here we propose a self matched and compact slot spiral antenna providing a unidirectional pattern.",
"title": ""
}
] |
1840153 | Eclectic domain mixing for effective adaptation in action spaces | [
{
"docid": "pos:1840153_0",
"text": "Article history: Available online 8 January 2014",
"title": ""
},
{
"docid": "pos:1840153_1",
"text": "Domain-invariant representations are key to addressing the domain shift problem where the training and test examples follow different distributions. Existing techniques that have attempted to match the distributions of the source and target domains typically compare these distributions in the original feature space. This space, however, may not be directly suitable for such a comparison, since some of the features may have been distorted by the domain shift, or may be domain specific. In this paper, we introduce a Domain Invariant Projection approach: An unsupervised domain adaptation method that overcomes this issue by extracting the information that is invariant across the source and target domains. More specifically, we learn a projection of the data to a low-dimensional latent space where the distance between the empirical distributions of the source and target examples is minimized. We demonstrate the effectiveness of our approach on the task of visual object recognition and show that it outperforms state-of-the-art methods on a standard domain adaptation benchmark dataset.",
"title": ""
}
] | [
{
"docid": "neg:1840153_0",
"text": "Codependency has been defined as an extreme focus on relationships, caused by a stressful family background (J. L. Fischer, L. Spann, & D. W. Crawford, 1991). In this study the authors assessed the relationship of the Spann-Fischer Codependency Scale (J. L. Fischer et al., 1991) and the Potter-Efron Codependency Assessment (L. A. Potter-Efron & P. S. Potter-Efron, 1989) with self-reported chronic family stress and family background. Students (N = 257) completed 2 existing self-report codependency measures and provided family background information. Results indicated that women had higher codependency scores than men on the Spann-Fischer scale. Students with a history of chronic family stress (with an alcoholic, mentally ill, or physically ill parent) had significantly higher codependency scores on both scales. The findings suggest that other types of family stressors, not solely alcoholism, may be predictors of codependency.",
"title": ""
},
{
"docid": "neg:1840153_1",
"text": "Multi-hop reading comprehension focuses on one type of factoid question, where a system needs to properly integrate multiple pieces of evidence to correctly answer a question. Previous work approximates global evidence with local coreference information, encoding coreference chains with DAG-styled GRU layers within a gated-attention reader. However, coreference is limited in providing information for rich inference. We introduce a new method for better connecting global evidence, which forms more complex graphs compared to DAGs. To perform evidence integration on our graphs, we investigate two recent graph neural networks, namely graph convolutional network (GCN) and graph recurrent network (GRN). Experiments on two standard datasets show that richer global information leads to better answers. Our method performs better than all published results on these datasets.",
"title": ""
},
{
"docid": "neg:1840153_2",
"text": "A hierarchical model of approach and avoidance achievement motivation was proposed and tested in a college classroom. Mastery, performance-approach, and performance-avoidance goals were assessed and their antecedents and consequences examined. Results indicated that mastery goals were grounded in achievement motivation and high competence expectancies; performance-avoidance goals, in fear of failure and low competence expectancies; and performance-approach goals, in achievement motivation, fear of failure, and high competence expectancies. Mastery goals facilitated intrinsic motivation, performance-approach goals enhanced graded performance, and performance-avoidance goals proved inimical to both intrinsic motivation and graded performance. The proposed model represents an integration of classic and contemporary approaches to the study of achievement motivation.",
"title": ""
},
{
"docid": "neg:1840153_3",
"text": "Graph processing is increasingly used in knowledge economies and in science, in advanced marketing, social networking, bioinformatics, etc. A number of graph-processing systems, including the GPU-enabled Medusa and Totem, have been developed recently. Understanding their performance is key to system selection, tuning, and improvement. Previous performance evaluation studies have been conducted for CPU-based graph-processing systems, such as Graph and GraphX. Unlike them, the performance of GPU-enabled systems is still not thoroughly evaluated and compared. To address this gap, we propose an empirical method for evaluating GPU-enabled graph-processing systems, which includes new performance metrics and a selection of new datasets and algorithms. By selecting 9 diverse graphs and 3 typical graph-processing algorithms, we conduct a comparative performance study of 3 GPU-enabled systems, Medusa, Totem, and MapGraph. We present the first comprehensive evaluation of GPU-enabled systems with results giving insight into raw processing power, performance breakdown into core components, scalability, and the impact on performance of system-specific optimization techniques and of the GPU generation. We present and discuss many findings that would benefit users and developers interested in GPU acceleration for graph processing.",
"title": ""
},
{
"docid": "neg:1840153_4",
"text": "Creating a mobile application often requires the developers to create one for Android and one for iOS, the two leading operating systems for mobile devices. The two applications may have the same layout and logic but several components of the user interface (UI) will differ and the applications themselves need to be developed in two different languages. This process is gruesome since it is time consuming to create two applications and it requires two different sets of knowledge. There have been attempts to create techniques, services or frameworks in order to solve this problem but these hybrids have not been able to provide a native feeling of the resulting applications. This thesis has evaluated the newly released framework React Native that can create both iOS and Android applications by compiling the code written in React. The resulting applications can share code and consist of the UI components which are unique for each platform. The thesis focused on Android and tried to replicate an existing Android application in order to measure user experience and performance. The result was surprisingly positive for React Native as some users could not tell the two applications apart and nearly all users did not mind using a React Native application. The performance evaluation measured GPU frequency, CPU load, memory usage and power consumption. Nearly all measurements displayed a performance advantage for the Android application but the differences were not protruding. The overall experience is that React Native is a very interesting framework that can simplify the development process for mobile applications to a high degree. As long as the application itself is not too complex, the development is uncomplicated and one is able to create an application in very short time that can be compiled to both Android and iOS. First of all I would like to express my deepest gratitude to Valtech who aided me throughout the whole thesis with books, tools and knowledge. They supplied me with two very competent consultants, Alexander Lindholm and Tomas Tunström, which made it possible for me to bounce off ideas and in the end have a great thesis. Furthermore, a big thanks to the other students at Talangprogrammet who have supported each other and me during this period of time and made it fun even when it was at its most tiresome. Furthermore I would like to thank my examiner Erik Berglund at Linköpings university who has guided me these last months and provided insightful comments regarding the paper. Ultimately I would like to thank my family who have always been there to support me and especially my little brother who is my main motivation in life.",
"title": ""
},
{
"docid": "neg:1840153_5",
"text": "Programmers often need to reason about how a program evolved between two or more program versions. Reasoning about program changes is challenging as there is a significant gap between how programmers think about changes and how existing program differencing tools represent such changes. For example, even though modification of a locking protocol is conceptually simple and systematic at a code level, diff extracts scattered text additions and deletions per file. To enable programmers to reason about program differences at a high level, this paper proposes a rule-based program differencing approach that automatically discovers and represents systematic changes as logic rules. To demonstrate the viability of this approach, we instantiated this approach at two different abstraction levels in Java: first at the level of application programming interface (API) names and signatures, and second at the level of code elements (e.g., types, methods, and fields) and structural dependences (e.g., method-calls, field-accesses, and subtyping relationships). The benefit of this approach is demonstrated through its application to several open source projects as well as a focus group study with professional software engineers from a large e-commerce company.",
"title": ""
},
{
"docid": "neg:1840153_6",
"text": "The tripeptide glutathione is the thiol compound present in the highest concentration in cells of all organs. Glutathione has many physiological functions including its involvement in the defense against reactive oxygen species. The cells of the human brain consume about 20% of the oxygen utilized by the body but constitute only 2% of the body weight. Consequently, reactive oxygen species which are continuously generated during oxidative metabolism will be generated in high rates within the brain. Therefore, the detoxification of reactive oxygen species is an essential task within the brain and the involvement of the antioxidant glutathione in such processes is very important. The main focus of this review article will be recent results on glutathione metabolism of different brain cell types in culture. The glutathione content of brain cells depends strongly on the availability of precursors for glutathione. Different types of brain cells prefer different extracellular glutathione precursors. Glutathione is involved in the disposal of peroxides by brain cells and in the protection against reactive oxygen species. In coculture astroglial cells protect other neural cell types against the toxicity of various compounds. One mechanism for this interaction is the supply by astroglial cells of glutathione precursors to neighboring cells. Recent results confirm the prominent role of astrocytes in glutathione metabolism and the defense against reactive oxygen species in brain. These results also suggest an involvement of a compromised astroglial glutathione system in the oxidative stress reported for neurological disorders.",
"title": ""
},
{
"docid": "neg:1840153_7",
"text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.",
"title": ""
},
{
"docid": "neg:1840153_8",
"text": "This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.",
"title": ""
},
{
"docid": "neg:1840153_9",
"text": "........................................................................................................................................................ i",
"title": ""
},
{
"docid": "neg:1840153_10",
"text": "To allow the hidden units of a restricted Boltzmann machine to model the transformation between two successive images, Memisevic and Hinton (2007) introduced three-way multiplicative interactions that use the intensity of a pixel in the first image as a multiplicative gain on a learned, symmetric weight between a pixel in the second image and a hidden unit. This creates cubically many parameters, which form a three-dimensional interaction tensor. We describe a low-rank approximation to this interaction tensor that uses a sum of factors, each of which is a three-way outer product. This approximation allows efficient learning of transformations between larger image patches. Since each factor can be viewed as an image filter, the model as a whole learns optimal filter pairs for efficiently representing transformations. We demonstrate the learning of optimal filter pairs from various synthetic and real image sequences. We also show how learning about image transformations allows the model to perform a simple visual analogy task, and we show how a completely unsupervised network trained on transformations perceives multiple motions of transparent dot patterns in the same way as humans.",
"title": ""
},
{
"docid": "neg:1840153_11",
"text": "User behaviour analysis based on traffic log in wireless networks can be beneficial to many fields in real life: not only for commercial purposes, but also for improving network service quality and social management. We cluster users into groups marked by the most frequently visited websites to find their preferences. In this paper, we propose a user behaviour model based on Topic Model from document classification problems. We use the logarithmic TF-IDF (term frequency - inverse document frequency) weighing to form a high-dimensional sparse feature matrix. Then we apply LSA (Latent semantic analysis) to deduce the latent topic distribution and generate a low-dimensional dense feature matrix. K-means++, which is a classic clustering algorithm, is then applied to the dense feature matrix and several interpretable user clusters are found. Moreover, by combining the clustering results with additional demographical information, including age, gender, and financial information, we are able to uncover more realistic implications from the clustering results.",
"title": ""
},
{
"docid": "neg:1840153_12",
"text": "We present the development on an ultra-wideband (UWB) radar system and its signal processing algorithms for detecting human breathing and heartbeat in the paper. The UWB radar system consists of two (Tx and Rx) antennas and one compact CMOS UWB transceiver. Several signal processing techniques are developed for the application. The system has been tested by real measurements.",
"title": ""
},
{
"docid": "neg:1840153_13",
"text": "The Internet is a great discovery for ordinary citizens correspondence. People with criminal personality have found a method for taking individual data without really meeting them and with minimal danger of being gotten. It is called Phishing. Phishing represents a huge threat to the web based business industry. Not just does it smash the certainty of clients towards online business, additionally causes electronic administration suppliers colossal financial misfortune. Subsequently, it is fundamental to think about phishing. This paper gives mindfulness about Phishing assaults and hostile to phishing apparatuses.",
"title": ""
},
{
"docid": "neg:1840153_14",
"text": "Fischler PER •Sequence of tokens mapped to word embeddings. •Bidirectional LSTM builds context-dependent representations for each word. •A small feedforward layer encourages generalisation. •Conditional Random Field (CRF) at the top outputs the most optimal label sequence for the sentence. •Using character-based dynamic embeddings (Rei et al., 2016) to capture morphological patterns and unseen words.",
"title": ""
},
{
"docid": "neg:1840153_15",
"text": "Projects combining agile methods with CMMI combine adaptability with predictability to better serve large customer needs. The introduction of Scrum at Systematic, a CMMI Level 5 company, doubled productivity and cut defects by 40% compared to waterfall projects in 2006 by focusing on early testing and time to fix builds. Systematic institutionalized Scrum across all projects and used data driven tools like story process efficiency to surface Product Backlog impediments. This allowed them to systematically develop a strategy for a second doubling in productivity. Two teams have achieved a sustainable quadrupling of productivity compared to waterfall projects. We discuss here the strategy to bring the entire company to that level. Our experiences shows that Scrum and CMMI together bring a more powerful combination of adaptability and predictability than either one alone and suggest how other companies can combine them to achieve Toyota level performance – 4 times the productivity and 12 times the quality of waterfall teams.",
"title": ""
},
{
"docid": "neg:1840153_16",
"text": "We examine the influence of venture capital on patented inventions in the United States across twenty industries over three decades. We address concerns about causality in several ways, including exploiting a 1979 policy shift that spurred venture capital fundraising. We find that increases in venture capital activity in an industry are associated with significantly higher patenting rates. While the ratio of venture capital to R&D averaged less than 3% from 1983–1992, our estimates suggest that venture capital may have accounted for 8% of industrial innovations in that period.",
"title": ""
},
{
"docid": "neg:1840153_17",
"text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams. It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.",
"title": ""
},
{
"docid": "neg:1840153_18",
"text": "In recent years, archaeal diversity surveys have received increasing attention. Brazil is a country known for its natural diversity and variety of biomes, which makes it an interesting sampling site for such studies. However, archaeal communities in natural and impacted Brazilian environments have only recently been investigated. In this review, based on a search on the PubMed database on the last week of April 2016, we present and discuss the results obtained in the 51 studies retrieved, focusing on archaeal communities in water, sediments, and soils of different Brazilian environments. We concluded that, in spite of its vast territory and biomes, the number of publications focusing on archaeal detection and/or characterization in Brazil is still incipient, indicating that these environments still represent a great potential to be explored.",
"title": ""
}
] |
1840154 | Approaches for teaching computational thinking strategies in an educational game: A position paper | [
{
"docid": "pos:1840154_0",
"text": "Computational thinking is gaining recognition as an important skill set for students, both in computer science and other disciplines. Although there has been much focus on this field in recent years, it is rarely taught as a formal course within the curriculum, and there is little consensus on what exactly computational thinking entails and how to teach and evaluate it. To address these concerns, we have developed a computational thinking framework to be used as a planning and evaluative tool. Within this framework, we aim to unify the differing opinions about what computational thinking should involve. As a case study, we have applied the framework to Light-Bot, an educational game with a strong focus on programming, and found that the framework provides us with insight into the usefulness of the game to reinforce computer science concepts.",
"title": ""
}
] | [
{
"docid": "neg:1840154_0",
"text": "Continual Learning in artificial neural networks suffers from interference and forgetting when different tasks are learned sequentially. This paper introduces the Active Long Term Memory Networks (A-LTM), a model of sequential multitask deep learning that is able to maintain previously learned associations between sensory input and behavioral output while acquiring new knowledge. A-LTM exploits the non-convex nature of deep neural networks and actively maintains knowledge of previously learned, inactive tasks using a distillation loss [1]. Distortions of the learned input-output map are penalized but hidden layers are free to traverse towards new local optima that are more favorable for the multi-task objective. We re-frame McClelland’s seminal Hippocampal theory [2] with respect to Catastrophic Inference (CI) behavior exhibited by modern deep architectures trained with back-propagation and inhomogeneous sampling of latent factors across epochs. We present empirical results of non-trivial CI during continual learning in Deep Linear Networks trained on the same task, in Convolutional Neural Networks when the task shifts from predicting semantic to graphical factors and during domain adaptation from simple to complex environments. We present results of the A-LTM model’s ability to maintain viewpoint recognition learned in the highly controlled iLab-20M [3] dataset with 10 object categories and 88 camera viewpoints, while adapting to the unstructured domain of Imagenet [4] with 1,000 object categories.",
"title": ""
},
{
"docid": "neg:1840154_1",
"text": "present paper introduces an innovative approach to automatically grade the disease on plant leaves. The system effectively inculcates Information and Communication Technology (ICT) in agriculture and hence contributes to Precision Agriculture. Presently, plant pathologists mainly rely on naked eye prediction and a disease scoring scale to grade the disease. This manual grading is not only time consuming but also not feasible. Hence the current paper proposes an image processing based approach to automatically grade the disease spread on plant leaves by employing Fuzzy Logic. The results are proved to be accurate and satisfactory in contrast with manual grading. Keywordscolor image segmentation, disease spot extraction, percent-infection, fuzzy logic, disease grade. INTRODUCTION The sole area that serves the food needs of the entire human race is the Agriculture sector. It has played a key role in the development of human civilization. Plants exist everywhere we live, as well as places without us. Plant disease is one of the crucial causes that reduces quantity and degrades quality of the agricultural products. Plant Pathology is the scientific study of plant diseases caused by pathogens (infectious diseases) and environmental conditions (physiological factors). It involves the study of pathogen identification, disease etiology, disease cycles, economic impact, plant disease epidemiology, plant disease resistance, pathosystem genetics and management of plant diseases. Disease is impairment to the normal state of the plant that modifies or interrupts its vital functions such as photosynthesis, transpiration, pollination, fertilization, germination etc. Plant diseases have turned into a nightmare as it can cause significant reduction in both quality and quantity of agricultural products [2]. Information and Communication Technology (ICT) application is going to be implemented as a solution in improving the status of the agriculture sector [3]. 
Due to the manifestation and developments in the fields of sensor networks, robotics, GPS technology, communication systems etc., precision agriculture started emerging [10]. The objectives of precision agriculture are profit maximization, agricultural input rationalization and environmental damage reduction by adjusting the agricultural practices to the site demands. In the area of disease management, the grade of the disease is determined to provide an accurate and precise treatment advisory. EXISTING SYSTEM: MANUAL GRADING. Presently the plant pathologists mainly rely on naked eye prediction and a disease scoring scale to grade the disease on leaves. There are some problems associated with this manual grading. Diseases are inevitable in plants. When a plant gets affected by the disease, a treatment advisory is required to cure the... (Arun Kumar R et al., Int. J. Comp. Tech. Appl., Vol. 2 (5), 1709-1716, Sept-Oct 2011, ISSN 2229-6093)",
"title": ""
},
{
"docid": "neg:1840154_2",
"text": "This chapter surveys recent developments in the area of multimedia big data, the biggest big data. One core problem is how to best process this multimedia big data in an efficient and scalable way. We outline examples of the use of the MapReduce framework, including Hadoop, which has become the most common approach to a truly scalable and efficient framework for common multimedia processing tasks, e.g., content analysis and retrieval. We also examine recent developments on deep learning which has produced promising results in large-scale multimedia processing and retrieval. Overall the focus has been on empirical studies rather than the theoretical so as to highlight the most practically successful recent developments and highlight the associated caveats or lessons learned.",
"title": ""
},
{
"docid": "neg:1840154_3",
"text": "The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the “physiological envelope” during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.",
"title": ""
},
{
"docid": "neg:1840154_4",
"text": "Deep Learning is arguably the most rapidly evolving research area in recent years. As a result it is not surprising that the design of state-of-the-art deep neural net models proceeds without much consideration of the latest hardware targets, and the design of neural net accelerators proceeds without much consideration of the characteristics of the latest deep neural net models. Nevertheless, in this paper we show that there are significant improvements available if deep neural net models and neural net accelerators are co-designed. This paper is trimmed to 6 pages to meet the conference requirement. A longer version with more detailed discussion will be released afterwards.",
"title": ""
},
{
"docid": "neg:1840154_5",
"text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.",
"title": ""
},
{
"docid": "neg:1840154_6",
"text": "Blockchain has proven successful for decision making on live streaming data in various applications; it is among the latest forms of information technology. There are two broad Blockchain categories: public and private. Public Blockchains are very transparent, as the data is distributed and can be accessed by anyone within the distributed system. Private Blockchains are restricted, and therefore data transfer can only take place in a constrained environment. Using private Blockchains to maintain private records for managed history or governing regulations can be very effective, because the data, records and logs are made with respect to a particular user or application. The Blockchain system can also gather data records together and transfer them as secure data records to a third party, who can then take further actions. In this paper, an automotive road safety case study is reviewed to demonstrate the feasibility of using private Blockchains in the automotive industry. Within this case study, anomalies occur when a driver ignores the traffic rules. The Blockchain system itself monitors and logs the behavior of a driver using map layers, geo data, and external rules obtained from the local governing body. As the information is logged, the driver’s private information is not shared, so the system is both accurate and secure. Additionally, private Blockchains are small systems, therefore they are easy to maintain and faster when compared to distributed (public) Blockchains.",
"title": ""
},
{
"docid": "neg:1840154_7",
"text": "Understanding and modifying the effects of arbitrary illumination on human faces in a realistic manner is a challenging problem both for face synthesis and recognition. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using spherical harmonics representation. Morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework, by proposing a 3D spherical harmonic basis morphable model (SHBMM) and demonstrate that any face under arbitrary unknown lighting can be simply represented by three low-dimensional vectors: shape parameters, spherical harmonic basis parameters and illumination coefficients. We show that, with our SHBMM, given one single image under arbitrary unknown lighting, we can remove the illumination effects from the image (face \"delighting\") and synthesize new images under different illumination conditions (face \"re-lighting\"). Furthermore, we demonstrate that cast shadows can be detected and subsequently removed by using the image error between the input image and the corresponding rendered image. We also propose two illumination invariant face recognition methods based on the recovered SHBMM parameters and the de-lit images respectively. Experimental results show that using only a single image of a face under unknown lighting, we can achieve high recognition rates and generate photorealistic images of the face under a wide range of illumination conditions, including multiple sources of illumination.",
"title": ""
},
{
"docid": "neg:1840154_8",
"text": "Artificial intelligence (AI) will have many profound societal effects. It promises potential benefits (and may also pose risks) in education, defense, business, law, and science. In this article we explore how AI is likely to affect employment and the distribution of income. We argue that AI will indeed reduce drastically the need for human toil. We also note that some people fear the automation of work by machines and the resulting unemployment. Yet, since the majority of us probably would rather use our time for activities other than our present jobs, we ought thus to greet the work-eliminating consequences of AI enthusiastically. The paper discusses two reasons, one economic and one psychological, for this paradoxical apprehension. We conclude with a discussion of problems of moving toward the kind of economy that will be enabled by developments in AI. (Acknowledgments: I am grateful for the helpful comments provided by many people. Specifically I would like to acknowledge the advice received from Sandra Cook and Victor Walling of SRI; Wassily Leontief and Faye Duchin of the New York University Institute for Economic Analysis; Margaret Boden of the University of Sussex; Henry Levin and Charles Holloway of Stanford University; James Albus of the National Bureau of Standards; and Peter Hart of Syntelligence. Herbert Simon, of Carnegie-Mellon University, wrote me extensive criticisms and rebuttals of my arguments. Robert Solow of MIT was quite skeptical of my premises, but conceded nevertheless that my conclusions could possibly follow from them if certain other economic conditions were satisfied. 
Savel Kliachko of SRI improved my composition and also referred me to a prescient article by Keynes (Keynes, 1933), who, a half-century ago, predicted an end to toil within one hundred years.) ARTIFICIAL INTELLIGENCE [AI] and other developments in computer science are giving birth to a dramatically different class of machines: machines that can perform tasks requiring reasoning, judgment, and perception that previously could be done only by humans. Will these machines reduce the need for human toil and thus cause unemployment? There are two opposing views in response to this question. Some claim that AI is not really very different from other technologies that have supported automation and increased productivity: technologies such as mechanical engineering, electronics, control engineering, and operations research. Like them, AI may also lead ultimately to an expanding economy with a concomitant expansion of employment opportunities. At worst, according to this view, there will be some, perhaps even substantial, shifts in the types of jobs, but certainly no overall reduction in the total number of jobs. In my opinion, however, such an outcome is based on an overly conservative appraisal of the real potential of artificial intelligence. Others accept a rather strong hypothesis with regard to AI, one that sets AI far apart from previous labor-saving technologies. Quite simply, this hypothesis affirms that anything people can do, AI can do as well. Certainly AI has not yet achieved human-level performance in many important functions, but many AI scientists believe that artificial intelligence inevitably will equal and surpass human mental abilities, if not in twenty years, then surely in fifty. The main conclusion of this view of AI is that, even if AI does create more work, this work can also be performed by AI devices without necessarily implying more jobs for humans. Of course, the mere fact that some work can be performed automatically does not make it inevitable that it will be. Automation depends on many factors: economic, political, and social. 
The major economic parameter would seem to be the relative cost of having either people or machines execute a given task (at a specified rate and level of quality). (AI Magazine, Volume 5, Number 2, Summer 1984, © AAAI)",
"title": ""
},
{
"docid": "neg:1840154_9",
"text": "The output of high-level synthesis typically consists of a netlist of generic RTL components and a state sequencing table. While module generators and logic synthesis tools can be used to map RTL components into standard cells or layout geometries, they cannot provide technology mapping into the data book libraries of functional RTL cells used commonly throughout the industrial design community. In this paper, we introduce an approach to implementing generic RTL components with technology-specific RTL library cells. This approach addresses the criticism of designers who feel that high-level synthesis tools should be used in conjunction with existing RTL data books. We describe how GENUS, a library of generic RTL components, is organized for use in high-level synthesis and how DTAS, a functional synthesis system, is used to map GENUS components into RTL library cells.",
"title": ""
},
{
"docid": "neg:1840154_10",
"text": "This paper presents a likelihood-based methodology for a probabilistic representation of a stochastic quantity for which only sparse point data and/or interval data may be available. The likelihood function is evaluated from the probability density function (PDF) for sparse point data and the cumulative distribution function for interval data. The full likelihood function is used in this paper to calculate the entire PDF of the distribution parameters. The uncertainty in the distribution parameters is integrated to calculate a single PDF for the quantity of interest. The approach is then extended to non-parametric PDFs, wherein the entire distribution can be discretized at a finite number of points and the probability density values at these points can be inferred using the principle of maximum likelihood, thus avoiding the assumption of any particular distribution. The proposed approach is demonstrated with challenge problems from the Sandia Epistemic Uncertainty Workshop and the results are compared with those of previous studies that pursued different approaches to represent and propagate interval description of",
"title": ""
},
{
"docid": "neg:1840154_11",
"text": "Increased demand for wireless sensor networks in automation practice has resulted in a relatively new wireless standard: ZigBee. A new workplace was established at the Department of Electronics and Multimedia Communications (DEMC) in order to keep up with this modern ZigBee trend. This paper presents the first results and experiences associated with ZigBee based wireless sensor networking. The accent was put on suitable chipset platform selection for Home Automation wireless network purposes. Four popular microcontroller platforms were selected to investigate memory requirements and power consumption: ARM, x51, HCS08, and ColdFire. The next objective was to test interoperability between various manufacturers’ platforms, which is an important feature of the ZigBee standard. A simple network based on the ZigBee physical layer as well as a ZigBee compliant network were built to confirm basic ZigBee interoperability.",
"title": ""
},
{
"docid": "neg:1840154_12",
"text": "While it is still most common for information visualization researchers to develop new visualizations from a data-or taskdriven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design.",
"title": ""
},
{
"docid": "neg:1840154_13",
"text": "The requirement to perform complicated statistical analysis of big data by institutions of engineering, scientific research, health care, commerce, banking and computer research is immense. However, the limitations of widely used desktop software like R, Excel, Minitab and SPSS restrict a researcher's ability to deal with big data. Big data analytic tools like IBM Big Insight, Revolution Analytics, and Tableau are commercial and heavily licensed. Still, to deal with big data, the client has to invest in infrastructure, installation and maintenance of a Hadoop cluster to deploy these analytical tools. Apache Hadoop is an open source distributed computing framework that uses commodity hardware. With this project, I intend to combine Apache Hadoop and R software on the cloud. The objective is to build a SaaS (Software-as-a-Service) analytic platform that stores and analyzes big data using open source Apache Hadoop and open source R software. The benefits of this cloud based big data analytical service are user friendliness and cost, as it is developed using open-source software. The system is cloud based, so users have their own space in the cloud where they can store their data. Users can browse data, files and folders using a browser and arrange datasets. A user can select a dataset, analyze it and store the result back to cloud storage. An enterprise with a cloud environment can save the cost of hardware, software upgrades, maintenance and network configuration, making it more economical.",
"title": ""
},
{
"docid": "neg:1840154_14",
"text": "Worker reliability is a longstanding issue in crowdsourcing, and the automatic discovery of high quality workers is an important practical problem. Most previous work on this problem mainly focuses on estimating the quality of each individual worker jointly with the true answer of each task. However, in practice, for some tasks, worker quality could be associated with some explicit characteristics of the worker, such as education level, major and age. So the following question arises: how do we automatically discover related worker attributes for a given task, and further utilize the findings to improve data quality? In this paper, we propose a general crowd targeting framework that can automatically discover, for a given task, if any group of workers based on their attributes have higher quality on average; and target such groups, if they exist, for future work on the same task. Our crowd targeting framework is complementary to traditional worker quality estimation approaches. Furthermore, an advantage of our framework is that it is more budget efficient because we are able to target potentially good workers before they actually do the task. Experiments on real datasets show that the accuracy of final prediction can be improved significantly for the same budget (or even less budget in some cases). Our framework can be applied to many real word tasks and can be easily integrated in current crowdsourcing platforms.",
"title": ""
},
{
"docid": "neg:1840154_15",
"text": "The rapid proliferation of the Internet and the cost-effective growth of its key enabling technologies are revolutionizing information technology and creating unprecedented opportunities for developing largescale distributed applications. At the same time, there is a growing concern over the security of Web-based applications, which are rapidly being deployed over the Internet [4]. For example, e-commerce—the leading Web-based application—is projected to have a market exceeding $1 trillion over the next several years. However, this application has already become a security nightmare for both customers and business enterprises as indicated by the recent episodes involving unauthorized access to credit card information. Other leading Web-based applications with considerable information security and privacy issues include telemedicine-based health-care services and online services or businesses involving both public and private sectors. Many of these applications are supported by workflow management systems (WFMSs) [1]. A large number of public and private enterprises are in the forefront of adopting Internetbased WFMSs and finding ways to improve their services and decision-making processes, hence we are faced with the daunting challenge of ensuring the security and privacy of information in such Web-based applications [4]. Typically, a Web-based application can be represented as a three-tier architecture, depicted in the figure, which includes a Web client, network servers, and a back-end information system supported by a suite of databases. For transaction-oriented applications, such as e-commerce, middleware is usually provided between the network servers and back-end systems to ensure proper interoperability. Considerable security challenges and vulnerabilities exist within each component of this architecture. 
Existing public-key infrastructures (PKIs) provide encryption mechanisms for ensuring information confidentiality, as well as digital signature techniques for authentication, data integrity and non-repudiation [11]. As no access authorization services are provided in this approach, it has a rather limited scope for Web-based applications. The strong need for information security on the Internet is attributable to several factors, including the massive interconnection of heterogeneous and distributed systems, the availability of high volumes of sensitive information at the end systems maintained by corporations and government agencies, easy distribution of automated malicious software by malfeasors, the ease with which computer crimes can be committed anonymously from across geographic boundaries, and the lack of forensic evidence in computer crimes, which makes the detection and prosecution of criminals extremely difficult. Two classes of services are crucial for a secure Internet infrastructure. These include access control services and communication security services. Access... (James B.D. Joshi, ...)",
"title": ""
},
{
"docid": "neg:1840154_16",
"text": "Current devices have limited battery life, typically lasting less than one day. This can lead to situations where critical tasks, such as making an emergency phone call, are not possible. Other devices, supporting different functionality, may have sufficient battery life to enable this task. We present PowerShake; an exploration of power as a shareable commodity between mobile (and wearable) devices. PowerShake enables users to control the balance of power levels in their own devices (intra-personal transactions) and to trade power with others (inter-personal transactions) according to their ongoing usage requirements. This paper demonstrates Wireless Power Transfer (WPT) between mobile devices. PowerShake is: simple to perform on-the-go; supports ongoing/continuous tasks (transferring at ~3.1W); fits in a small form factor; and is compliant with electromagnetic safety guidelines while providing charging efficiency similar to other standards (48.2% vs. 51.2% in Qi). Based on our proposed technical implementation, we run a series of workshops to derive candidate designs for PowerShake enabled devices and interactions, and to bring to light the social implications of power as a tradable asset.",
"title": ""
},
{
"docid": "neg:1840154_17",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources available and limited privileges granted to the user, but also presents a unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
},
{
"docid": "neg:1840154_18",
"text": "Victor Hugo suggested the possibility that patterns created by the movement of grains of sand are in no small part responsible for the shape and feel of the natural world in which we live. No one can seriously doubt that granular materials, of which sand is but one example, are ubiquitous in our daily lives. They play an important role in many of our industries, such as mining, agriculture, and construction. They clearly are also important for geological processes where landslides, erosion, and, on a related but much larger scale, plate tectonics determine much of the morphology of Earth. Practically everything that we eat started out in a granular form, and all the clutter on our desks is often so close to the angle of repose that a chance perturbation will create an avalanche onto the floor. Moreover, Hugo hinted at the extreme sensitivity of the macroscopic world to the precise motion or packing of the individual grains. We may nevertheless think that he has overstepped the bounds of common sense when he related the creation of worlds to the movement of simple grains of sand. By the end of this article, we hope to have shown such an enormous richness and complexity to granular motion that Hugo’s metaphor might no longer appear farfetched and could have a literal meaning: what happens to a pile of sand on a table top is relevant to processes taking place on an astrophysical scale. Granular materials are simple: they are large conglomerations of discrete macroscopic particles. If they are noncohesive, then the forces between them are only repulsive so that the shape of the material is determined by external boundaries and gravity. If the grains are dry, any interstitial fluid, such as air, can often be neglected in determining many, but not all, of the flow and static properties of the system. 
Yet despite this seeming simplicity, a granular material behaves differently from any of the other familiar forms of matter—solids, liquids, or gases—and should therefore be considered an additional state of matter in its own right. In this article, we shall examine in turn the unusual behavior that granular material displays when it is considered to be a solid, liquid, or gas. For example, a sand pile at rest with a slope lower than the angle of repose, as in Fig. 1(a), behaves like a solid: the material remains at rest even though gravitational forces create macroscopic stresses on its surface. If the pile is tilted several degrees above the angle of repose, grains start to flow, as seen in Fig. 1(b). However, this flow is clearly not that of an ordinary fluid because it only exists in a boundary layer at the pile’s surface with no movement in the bulk at all. (Slurries, where grains are mixed with a liquid, have a phenomenology equally complex as the dry powders we shall describe in this article.) There are two particularly important aspects that contribute to the unique properties of granular materials: ordinary temperature plays no role, and the interactions between grains are dissipative because of static friction and the inelasticity of collisions. We might at first be tempted to view any granular flow as that of a dense gas since gases, too, consist of discrete particles with negligible cohesive forces between them. In contrast to ordinary gases, however, the energy scale kBT is insignificant here. The relevant energy scale is the potential energy mgd of a grain of mass m raised by its own diameter d in the Earth’s gravity g . For typical sand, this energy is at least 1012 times kBT at room temperature. Because kBT is irrelevant, ordinary thermodynamic arguments become useless. 
For example, many studies have shown (Williams, 1976; Rosato et al., 1987; Fan et al., 1990; Jullien et al., 1992; Duran et al., 1993; Knight et al., 1993; Savage, 1993; Zik et al., 1994; Hill and Kakalios, 1994; Metcalfe et al., 1995) that vibrations or rotations of a granular material will induce particles of different sizes to separate into different regions of the container. Since there are no attractive forces between",
"title": ""
},
{
"docid": "neg:1840154_19",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
}
] |
1840155 | The Latent Relation Mapping Engine: Algorithm and Experiments | [
{
"docid": "pos:1840155_0",
"text": "There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This article introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) The patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.",
"title": ""
}
] | [
{
"docid": "neg:1840155_0",
"text": "Inverse reinforcement learning addresses the general problem of recovering a reward function from samples of a policy provided by an expert/demonstrator. In this paper, we introduce active learning for inverse reinforcement learning. We propose an algorithm that allows the agent to query the demonstrator for samples at specific states, instead of relying only on samples provided at “arbitrary” states. The purpose of our algorithm is to estimate the reward function with similar accuracy as other methods from the literature while reducing the amount of policy samples required from the expert. We also discuss the use of our algorithm in higher dimensional problems, using both Monte Carlo and gradient methods. We present illustrative results of our algorithm in several simulated examples of different complexities.",
"title": ""
},
{
"docid": "neg:1840155_1",
"text": "Implementing controls in the car becomes a major challenge: The use of simple physical buttons does not scale to the increased number of assistive, comfort, and infotainment functions. Current solutions include hierarchical menus and multi-functional control devices, which increase complexity and visual demand. Another option is speech control, which is not widely accepted, as it does not support visibility of actions, fine-grained feedback, and easy undo of actions. Our approach combines speech and gestures. By using speech for identification of functions, we exploit the visibility of objects in the car (e.g., mirror) and simple access to a wide range of functions equaling a very broad menu. Using gestures for manipulation (e.g., left/right), we provide fine-grained control with immediate feedback and easy undo of actions. In a user-centered process, we determined a set of user-defined gestures as well as common voice commands. For a prototype, we linked this to a car interior and driving simulator. In a study with 16 participants, we explored the impact of this form of multimodal interaction on the driving performance against a baseline using physical buttons. The results indicate that the use of speech and gesture is slower than using buttons but results in a similar driving performance. Users comment in a DALI questionnaire that the visual demand is lower when using speech and gestures.",
"title": ""
},
{
"docid": "neg:1840155_2",
"text": "Single image super resolution (SISR) is to reconstruct a high resolution image from a single low resolution image. The SISR task has been a very attractive research topic over the last two decades. In recent years, convolutional neural network (CNN) based models have achieved great performance on SISR task. Despite the breakthroughs achieved by using CNN models, there are still some problems remaining unsolved, such as how to recover high frequency details of high resolution images. Previous CNN based models always use a pixel wise loss, such as l2 loss. Although the high resolution images constructed by these models have high peak signal-to-noise ratio (PSNR), they often tend to be blurry and lack high-frequency details, especially at a large scaling factor. In this paper, we build a super resolution perceptual generative adversarial network (SRPGAN) framework for SISR tasks. In the framework, we propose a robust perceptual loss based on the discriminator of the built SRPGAN model. We use the Charbonnier loss function to build the content loss and combine it with the proposed perceptual loss and the adversarial loss. Compared with other state-of-the-art methods, our method has demonstrated great ability to construct images with sharp edges and rich details. We also evaluate our method on different benchmarks and compare it with previous CNN based methods. The results show that our method can achieve much higher structural similarity index (SSIM) scores on most of the benchmarks than the previous state-of-art methods.",
"title": ""
},
{
"docid": "neg:1840155_3",
"text": "The intelligence community (IC) is asked to predict outcomes that may often be inherently unpredictable-and is blamed for the inevitable forecasting failures, be they false positives or false negatives. To move beyond blame games of accountability ping-pong that incentivize bureaucratic symbolism over substantive reform, it is necessary to reach bipartisan agreements on performance indicators that are transparent enough to reassure clashing elites (to whom the IC must answer) that estimates have not been politicized. Establishing such transideological credibility requires (a) developing accuracy metrics for decoupling probability and value judgments; (b) using the resulting metrics as criterion variables in validity tests of the IC's selection, training, and incentive systems; and (c) institutionalizing adversarial collaborations that conduct level-playing-field tests of clashing perspectives.",
"title": ""
},
{
"docid": "neg:1840155_4",
"text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.",
"title": ""
},
{
"docid": "neg:1840155_5",
"text": "BACKGROUND\nPhysician burnout has reached epidemic levels, as documented in national studies of both physicians in training and practising physicians. The consequences are negative effects on patient care, professionalism, physicians' own care and safety, and the viability of health-care systems. A more complete understanding than at present of the quality and outcomes of the literature on approaches to prevent and reduce burnout is necessary.\n\n\nMETHODS\nIn this systematic review and meta-analysis, we searched MEDLINE, Embase, PsycINFO, Scopus, Web of Science, and the Education Resources Information Center from inception to Jan 15, 2016, for studies of interventions to prevent and reduce physician burnout, including single-arm pre-post comparison studies. We required studies to provide physician-specific burnout data using burnout measures with validity support from commonly accepted sources of evidence. We excluded studies of medical students and non-physician health-care providers. We considered potential eligibility of the abstracts and extracted data from eligible studies using a standardised form. Outcomes were changes in overall burnout, emotional exhaustion score (and high emotional exhaustion), and depersonalisation score (and high depersonalisation). We used random-effects models to calculate pooled mean difference estimates for changes in each outcome.\n\n\nFINDINGS\nWe identified 2617 articles, of which 15 randomised trials including 716 physicians and 37 cohort studies including 2914 physicians met inclusion criteria. Overall burnout decreased from 54% to 44% (difference 10% [95% CI 5-14]; p<0·0001; I2=15%; 14 studies), emotional exhaustion score decreased from 23·82 points to 21·17 points (2·65 points [1·67-3·64]; p<0·0001; I2=82%; 40 studies), and depersonalisation score decreased from 9·05 to 8·41 (0·64 points [0·15-1·14]; p=0·01; I2=58%; 36 studies). 
High emotional exhaustion decreased from 38% to 24% (14% [11-18]; p<0·0001; I2=0%; 21 studies) and high depersonalisation decreased from 38% to 34% (4% [0-8]; p=0·04; I2=0%; 16 studies).\n\n\nINTERPRETATION\nThe literature indicates that both individual-focused and structural or organisational strategies can result in clinically meaningful reductions in burnout among physicians. Further research is needed to establish which interventions are most effective in specific populations, as well as how individual and organisational solutions might be combined to deliver even greater improvements in physician wellbeing than those achieved with individual solutions.\n\n\nFUNDING\nArnold P Gold Foundation Research Institute.",
"title": ""
},
{
"docid": "neg:1840155_6",
"text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.",
"title": ""
},
{
"docid": "neg:1840155_7",
"text": "Training of discrete latent variable models remains challenging because passing gradient information through discrete units is difficult. We propose a new class of smoothing transformations based on a mixture of two overlapping distributions, and show that the proposed transformation can be used for training binary latent models with either directed or undirected priors. We derive a new variational bound to efficiently train with Boltzmann machine priors. Using this bound, we develop DVAE++, a generative model with a global discrete prior and a hierarchy of convolutional continuous variables. Experiments on several benchmarks show that overlapping transformations outperform other recent continuous relaxations of discrete latent variables including Gumbel-Softmax (Maddison et al., 2016; Jang et al., 2016), and discrete variational autoencoders (Rolfe, 2016).",
"title": ""
},
{
"docid": "neg:1840155_8",
"text": "Across the world, organizations have teams gathering threat data to protect themselves from incoming cyber attacks and maintain a strong cyber security posture. Teams are also sharing information, because along with the data collected internally, organizations need external information to have a comprehensive view of the threat landscape. The information about cyber threats comes from a variety of sources, including sharing communities, open-source and commercial sources, and it spans many different levels and timescales. Immediately actionable information are often low-level indicators of compromise, such as known malware hash values or command-and-control IP addresses, where an actionable response can be executed automatically by a system. Threat intelligence refers to more complex cyber threat information that has been acquired or inferred through the analysis of existing information. Information such as the different malware families used over time with an attack or the network of threat actors involved in an attack, is valuable information and can be vital to understanding and predicting attacks, threat developments, as well as informing law enforcement investigations. This information is also actionable, but on a longer time scale. Moreover, it requires action and decision-making at the human level. There is a need for effective intelligence management platforms to facilitate the generation, refinement, and vetting of data, post sharing. In designing such a system, some of the key challenges that exist include: working with multiple intelligence sources, combining and enriching data for greater intelligence, determining intelligence relevance based on technical constructs, and organizational input, delivery into organizational workflows and into technological products. This paper discusses these challenges encountered and summarizes the community requirements and expectations for an all-encompassing Threat Intelligence Management Platform. 
The requirements expressed in this paper, when implemented, will serve as building blocks to create systems that can maximize value out of a set of collected intelligence and translate those findings into action for a broad range of stakeholders.",
"title": ""
},
{
"docid": "neg:1840155_9",
"text": "This note provides a family of classification problems, indexed by a positive integer k, where all shallow networks with fewer than exponentially (in k) many nodes exhibit error at least 1/3, whereas a deep network with 2 nodes in each of 2k layers achieves zero error, as does a recurrent network with 3 distinct nodes iterated k times. The proof is elementary, and the networks are standard feedforward networks with ReLU (Rectified Linear Unit) nonlinearities.",
"title": ""
},
{
"docid": "neg:1840155_10",
"text": "The componential theory of creativity is a comprehensive model of the social and psychological components necessary for an individual to produce creative work. The theory is grounded in a definition of creativity as the production of ideas or outcomes that are both novel and appropriate to some goal. In this theory, four components are necessary for any creative response: three components within the individual – domainrelevant skills, creativity-relevant processes, and intrinsic task motivation – and one component outside the individual – the social environment in which the individual is working. The current version of the theory encompasses organizational creativity and innovation, carrying implications for the work environments created by managers. This entry defines the components of creativity and how they influence the creative process, describing modifications to the theory over time. Then, after comparing the componential theory to other creativity theories, the article describes this theory’s evolution and impact.",
"title": ""
},
{
"docid": "neg:1840155_11",
"text": "Training a Fully Convolutional Network (FCN) for semantic segmentation requires a large number of pixel-level masks, which involves a large amount of human labour and time for annotation. In contrast, image-level labels are much easier to obtain. In this work, we propose a novel method for weakly supervised semantic segmentation with only image-level labels. The method relies on a large scale co-segmentation framework that can produce object masks for a group of images containing objects belonging to the same semantic class. We first retrieve images from search engines, e.g. Flickr and Google, using semantic class names as queries, e.g. class names in PASCAL VOC 2012. We then use high quality masks produced by co-segmentation on the retrieved images as well as the target dataset images with image level labels to train segmentation networks. We obtain IoU 56.9 on test set of PASCAL VOC 2012, which reaches state of the art performance.",
"title": ""
},
{
"docid": "neg:1840155_12",
"text": "The computation of page importance in a huge dynamic graph has recently attracted a lot of attention because of the web. Page importance or page rank is defined as the fixpoint of a matrix equation. Previous algorithms compute it off-line and require the use of a lot of extra CPU as well as disk resources in particular to store and maintain the link matrix of the web. We briefly discuss a new algorithm that works on-line, and uses much less resources. In particular, it does not require storing the link matrix. It is on-line in that it continuously refines its estimate of page importance while the web/graph is visited. When the web changes, page importance changes as well. We modify the algorithm so that it adapts dynamically to changes of the web. We report on experiments on web data and on synthetic data.",
"title": ""
},
{
"docid": "neg:1840155_13",
"text": "This paper presents new experimental results of angle of arrival (AoA) measurements for localizing passive RFID tags in the UHF frequency range. The localization system is based on the principle of a phased array with electronic beam steering mechanism. This approach has been successfully applied within a UHF RFID system and it allows the precise determination of the angle and the position of small passive RFID tags. The paper explains the basic principle, the experimental setup with the phased array and shows results of the measurements.",
"title": ""
},
{
"docid": "neg:1840155_14",
"text": "This paper discusses the latest developments in the optimization and fabrication of 3.3kV SiC vertical DMOSFETs. The devices show superior on-state and switching losses compared to the even the latest generation of 3.3kV fast Si IGBTs and promise to extend the upper switching frequency of high-voltage power conversion systems beyond several tens of kHz without the need to increase part count with 3-level converter stacks of faster 1.7kV IGBTs.",
"title": ""
},
{
"docid": "neg:1840155_15",
"text": "We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call “percepts” using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low-spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts, however, can lead to high-dimensionality video representations. To mitigate this effect and control the number of parameters, we introduce a variant of the GRU model that leverages the convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to state-of-art on the YouTube2Text dataset using a simpler caption-decoder model and without extra 3D CNN features.",
"title": ""
},
{
"docid": "neg:1840155_16",
"text": "Continuous opinion dynamics optimizer (CODO) is an algorithm based on human collective opinion formation process for solving continuous optimization problems. In this paper, we have studied the impact of topology and introduction of leaders in the society on the optimization performance of CODO. We have introduced three new variants of CODO and studied the efficacy of algorithms on several benchmark functions. Experimentation demonstrates that scale free CODO performs significantly better than all algorithms. Also, the role played by individuals with different degrees during the optimization process is studied.",
"title": ""
},
{
"docid": "neg:1840155_17",
"text": "Machine-to-Machine (M2M) paradigm enables machines (sensors, actuators, robots, and smart meter readers) to communicate with each other with little or no human intervention. M2M is a key enabling technology for the cyber-physical systems (CPSs). This paper explores CPS beyond M2M concept and looks at futuristic applications. Our vision is CPS with distributed actuation and in-network processing. We describe few particular use cases that motivate the development of the M2M communication primitives tailored to large-scale CPS. M2M communications in literature were considered in limited extent so far. The existing work is based on small-scale M2M models and centralized solutions. Different sources discuss different primitives. Few existing decentralized solutions do not scale well. There is a need to design M2M communication primitives that will scale to thousands and trillions of M2M devices, without sacrificing solution quality. The main paradigm shift is to design localized algorithms, where CPS nodes make decisions based on local knowledge. Localized coordination and communication in networked robotics, for matching events and robots, were studied to illustrate new directions.",
"title": ""
},
{
"docid": "neg:1840155_18",
"text": "CONIKS is a proposed key transparency system which enables a centralized service provider to maintain an auditable yet privacypreserving directory of users’ public keys. In the original CONIKS design, users must monitor that their data is correctly included in every published snapshot of the directory, necessitating either slow updates or trust in an unspecified third-party to audit that the data structure has stayed consistent. We demonstrate that the data structures for CONIKS are very similar to those used in Ethereum, a consensus computation platform with a Turing-complete programming environment. We can take advantage of this to embed the core CONIKS data structures into an Ethereum contract with only minor modifications. Users may then trust the Ethereum network to audit the data structure for consistency and non-equivocation. Users who do not trust (or are unaware of) Ethereum can self-audit the CONIKS data structure as before. We have implemented a prototype contract for our hybrid EthIKS scheme, demonstrating that it adds only modest bandwidth overhead to CONIKS proofs and costs hundredths of pennies per key update in fees at today’s rates.",
"title": ""
},
{
"docid": "neg:1840155_19",
"text": "This paper addresses the problem of identifying likely topics of texts by their position in the text. It describes the automated training and evaluation of an Optimal Position Policy, a method of locating the likely positions of topic-bearing sentences based on genre-speci c regularities of discourse structure. This method can be used in applications such as information retrieval, routing, and text summarization.",
"title": ""
}
] |
1840156 | Heuristic Feature Selection for Clickbait Detection | [
{
"docid": "pos:1840156_0",
"text": "This paper reports on the PAN 2014 evaluation lab which hosts three shared tasks on plagiarism detection, author identification, and author profiling. To improve the reproducibility of shared tasks in general, and PAN’s tasks in particular, the Webis group developed a new web service called TIRA, which facilitates software submissions. Unlike many other labs, PAN asks participants to submit running softwares instead of their run output. To deal with the organizational overhead involved in handling software submissions, the TIRA experimentation platform helps to significantly reduce the workload for both participants and organizers, whereas the submitted softwares are kept in a running state. This year, we addressed the matter of responsibility of successful execution of submitted softwares in order to put participants back in charge of executing their software at our site. In sum, 57 softwares have been submitted to our lab; together with the 58 software submissions of last year, this forms the largest collection of softwares for our three tasks to date, all of which are readily available for further analysis. The report concludes with a brief summary of each task.",
"title": ""
},
{
"docid": "pos:1840156_1",
"text": "Clickbait has become a nuisance on social media. To address the urging task of clickbait detection, we constructed a new corpus of 38,517 annotated Twitter tweets, the Webis Clickbait Corpus 2017. To avoid biases in terms of publisher and topic, tweets were sampled from the top 27 most retweeted news publishers, covering a period of 150 days. Each tweet has been annotated on 4-point scale by five annotators recruited at Amazon’s Mechanical Turk. The corpus has been employed to evaluate 12 clickbait detectors submitted to the Clickbait Challenge 2017. Download: https://webis.de/data/webis-clickbait-17.html Challenge: https://clickbait-challenge.org",
"title": ""
}
] | [
{
"docid": "neg:1840156_0",
"text": "The purpose of this study was to investigate the effect of the ultrasonic cavitation versus low level laser therapy in the treatment of abdominal adiposity in female post gastric bypass. Subjects: Sixty female suffering from localized fat deposits at the abdomen area after gastric bypass were divided randomly and equally into three equal groups Group (1): were received low level laser therapy plus bicycle exercises and abdominal exercises for 3 months, Group (2): were received ultrasonic cavitation therapy plus bicycle exercises and abdominal exercises for 3 months, and Group (3): were received bicycle exercises and abdominal exercises for 3 months. Methods: data were obtained for each patient from waist circumferences, skin fold and ultrasonography measurements were done after six weeks postoperative (preexercise) and at three months postoperative. The physical therapy program began, six weeks postoperative for experimental group. Including aerobic exercises performed on the stationary bicycle, for 30 min, 3 sessions per week for three months Results: showed a statistically significant decrease in waist circumferences, skin fold and ultrasonography measurements in the three groups, with a higher rate of reduction in Group (1) and Group (2) .Also there was a non-significant difference between Group (1) and Group (2). Conclusion: these results suggested that bothlow level laser therapy and ultrasonic cavitation had a significant effect on abdominal adiposity after gastric bypass in female.",
"title": ""
},
{
"docid": "neg:1840156_1",
"text": "Mindfiilness meditation is an increasingly popular intervention for the treatment of physical illnesses and psychological difficulties. Using intervention strategies with mechanisms familiar to cognitive behavioral therapists, the principles and practice of mindfijlness meditation offer promise for promoting many of the most basic elements of positive psychology. It is proposed that mindfulness meditation promotes positive adjustment by strengthening metacognitive skills and by changing schemas related to emotion, health, and illness. Additionally, the benefits of yoga as a mindfulness practice are explored. Even though much empirical work is needed to determine the parameters of mindfulness meditation's benefits, and the mechanisms by which it may achieve these benefits, theory and data thus far clearly suggest the promise of mindfulness as a link between positive psychology and cognitive behavioral therapies.",
"title": ""
},
{
"docid": "neg:1840156_2",
"text": "We show that the topological modular functor from Witten–Chern–Simons theory is universal for quantum computation in the sense that a quantum circuit computation can be efficiently approximated by an intertwining action of a braid on the functor’s state space. A computational model based on Chern–Simons theory at a fifth root of unity is defined and shown to be polynomially equivalent to the quantum circuit model. The chief technical advance: the density of the irreducible sectors of the Jones representation has topological implications which will be considered elsewhere.",
"title": ""
},
{
"docid": "neg:1840156_3",
"text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.",
"title": ""
},
{
"docid": "neg:1840156_4",
"text": "A hstruct -The concept of a super value node is developed to estend the theor? of influence diagrams to allow dynamic programming to be performed within this graphical modeling framework. The operations necessa? to exploit the presence of these nodes and efficiently analyze the models are developed. The key result is that by reprewnting value function separability in the structure of the graph of the influence diagram. formulation is simplified and operations on the model can take advantage of the wparability. Froni the decision analysis perspective. this allows simple exploitation of separabilih in the value function of a decision problem which can significantly reduce memory and computation requirements. Importantly. this allows algorithms to be designed to solve influence diagrams that automatically recognize the opportunih for applying dynamic programming. From the decision processes perspective, influence diagrams with super value nodes allow efficient formulation and solution of nonstandard decision process structures. They a h allow the exploitation of conditional independence between state variables. Examples are provided that demonstrate these advantages.",
"title": ""
},
{
"docid": "neg:1840156_5",
"text": "Recently, convolutional neural networks have demonstrated excellent performance on various visual tasks, including the classification of common two-dimensional images. In this paper, deep convolutional neural networks are employed to classify hyperspectral images directly in spectral domain. More specifically, the architecture of the proposed classifier contains five layers with weights which are the input layer, the convolutional layer, the max pooling layer, the full connection layer, and the output layer. These five layers are implemented on each spectral signature to discriminate against others. Experimental results based on several hyperspectral image data sets demonstrate that the proposed method can achieve better classification performance than some traditional methods, such as support vector machines and the conventional deep learning-based methods.",
"title": ""
},
{
"docid": "neg:1840156_6",
"text": "We present an algorithm for computing rigorous solutions to a large class of ordinary differential equations. The main algorithm is based on a partitioning process and the use of interval arithmetic with directed rounding. As an application, we prove that the Lorenz equations support a strange attractor, as conjectured by Edward Lorenz in 1963. This conjecture was recently listed by Steven Smale as one of several challenging problems for the twenty-first century. We also prove that the attractor is robust, i.e., it persists under small perturbations of the coefficients in the underlying differential equations. Furthermore, the flow of the equations admits a unique SRB measure, whose support coincides with the attractor. The proof is based on a combination of normal form theory and rigorous computations.",
"title": ""
},
{
"docid": "neg:1840156_7",
"text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively, as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.",
"title": ""
},
{
"docid": "neg:1840156_8",
"text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2. The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erdős-Rényi model 91 5.2. The diameter of G n,p 95 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if w̃ > 1? 115 6.3. No giant component if w̃ < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …",
"title": ""
},
{
"docid": "neg:1840156_9",
"text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section II we introduce the theory of packet broadcasting data networks. In Section III we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels, showing that the throughput of such channels can approach that of equivalent point-to-point channels.",
"title": ""
},
{
"docid": "neg:1840156_10",
"text": "Music is capable of evoking exceptionally strong emotions and of reliably affecting the mood of individuals. Functional neuroimaging and lesion studies show that music-evoked emotions can modulate activity in virtually all limbic and paralimbic brain structures. These structures are crucially involved in the initiation, generation, detection, maintenance, regulation and termination of emotions that have survival value for the individual and the species. Therefore, at least some music-evoked emotions involve the very core of evolutionarily adaptive neuroaffective mechanisms. Because dysfunctions in these structures are related to emotional disorders, a better understanding of music-evoked emotions and their neural correlates can lead to a more systematic and effective use of music in therapy.",
"title": ""
},
{
"docid": "neg:1840156_11",
"text": "Many companies are deploying services largely based on machine-learning algorithms for sophisticated processing of large amounts of data, either for consumers or industry. The state-of-the-art and most popular such machine-learning algorithms are Convolutional and Deep Neural Networks (CNNs and DNNs), which are known to be computationally and memory intensive. A number of neural network accelerators have been recently proposed which can offer high computational capacity/area ratio, but which remain hampered by memory accesses. However, unlike the memory wall faced by processors on general-purpose workloads, the CNNs and DNNs memory footprint, while large, is not beyond the capability of the on-chip storage of a multi-chip system. This property, combined with the CNN/DNN algorithmic characteristics, can lead to high internal bandwidth and low external communications, which can in turn enable high-degree parallelism at a reasonable area cost. In this article, we introduce a custom multi-chip machine-learning architecture along those lines, and evaluate performance by integrating electrical and optical inter-chip interconnects separately. We show that, on a subset of the largest known neural network layers, it is possible to achieve a speedup of 656.63× over a GPU, and reduce the energy by 184.05× on average for a 64-chip system. We implement the node down to the place and route at 28 nm, containing a combination of custom storage and computational units, with electrical inter-chip interconnects.",
"title": ""
},
{
"docid": "neg:1840156_12",
"text": "Skeem for their thoughtful comments and suggestions.",
"title": ""
},
{
"docid": "neg:1840156_13",
"text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.",
"title": ""
},
{
"docid": "neg:1840156_14",
"text": "This paper presents a novel mechatronics master-slave setup for hand telerehabilitation. The system consists of a sensorized glove acting as a remote master and a powered hand exoskeleton acting as a slave. The proposed architecture presents three main innovative solutions. First, it provides the therapist with an intuitive interface (a sensorized wearable glove) for conducting the rehabilitation exercises. Second, the patient can benefit from a robot-aided physical rehabilitation in which the slave hand robotic exoskeleton can provide an effective treatment outside the clinical environment without the physical presence of the therapist. Third, the mechatronics setup is integrated with a sensorized object, which allows for the execution of manipulation exercises and the recording of patient's improvements. In this paper, we also present the results of the experimental characterization carried out to verify the system usability of the proposed architecture with healthy volunteers.",
"title": ""
},
{
"docid": "neg:1840156_15",
"text": "Limited research has been done on exoskeletons to enable faster movements of the lower extremities. An exoskeleton’s mechanism can actually hinder agility by adding weight, inertia and friction to the legs; compensating inertia through control is particularly difficult due to instability issues. The added inertia will reduce the natural frequency of the legs, probably leading to lower step frequency during walking. We present a control method that produces an approximate compensation of an exoskeleton’s inertia. The aim is making the natural frequency of the exoskeleton-assisted leg larger than that of the unaided leg. The method uses admittance control to compensate the weight and friction of the exoskeleton. Inertia compensation is emulated by adding a feedback loop consisting of low-pass filtered acceleration multiplied by a negative gain. This gain simulates negative inertia in the low-frequency range. We tested the controller on a statically supported, single-DOF exoskeleton that assists swing movements of the leg. Subjects performed movement sequences, first unassisted and then using the exoskeleton, in the context of a computer-based task resembling a race. With zero inertia compensation, the steady-state frequency of leg swing was consistently reduced. Adding inertia compensation enabled subjects to recover their normal frequency of swing.",
"title": ""
},
{
"docid": "neg:1840156_16",
"text": "Aesthetic quality estimation of an image is a challenging task. In this paper, we introduce a deep CNN approach to tackle this problem. We adopt the state-of-the-art object-recognition CNN as our baseline model, and adapt it for handling several high-level attributes. The networks capable of dealing with these high-level concepts are then fused by a learned logical connector for predicting the aesthetic rating. Results on the standard benchmark show the effectiveness of our approach.",
"title": ""
},
{
"docid": "neg:1840156_17",
"text": "We now describe an interesting application of SVD to text documents. Suppose we represent documents as a bag of words, so Xij is the number of times word j occurs in document i, for j = 1 : W and i = 1 : D, where W is the number of words and D is the number of documents. To find a document that contains a given word, we can use standard search procedures, but this can get confused by synonymy (different words with the same meaning) and polysemy (same word with different meanings). An alternative approach is to assume that X was generated by some low dimensional latent representation X̂ ∈ IR, where K is the number of latent dimensions. If we compare documents in the latent space, we should get improved retrieval performance, because words of similar meaning get mapped to similar low dimensional locations. We can compute a low dimensional representation of X by computing the SVD, and then taking the top k singular values/vectors.",
"title": ""
},
{
"docid": "neg:1840156_18",
"text": "This paper proposes a new novel snubberless current-fed half-bridge front-end isolated dc/dc converter-based inverter for photovoltaic applications. It is suitable for grid-tied (utility interface) as well as off-grid (standalone) application based on the mode of control. The proposed converter attains clamping of the device voltage by secondary modulation, thus eliminating the need of snubber or active-clamp. Zero-current switching or natural commutation of primary devices and zero-voltage switching of secondary devices is achieved. Soft-switching is inherent owing to the proposed secondary modulation and is maintained during wide variation in voltage and power transfer capacity and thus is suitable for photovoltaic (PV) applications. Primary device voltage is clamped at reflected output voltage, and secondary device voltage is clamped at output voltage. Steady-state operation and analysis, and design procedure are presented. Simulation results using PSIM 9.0 are given to verify the proposed analysis and design. An experimental converter prototype rated at 200 W has been designed, built, and tested in the laboratory to verify and demonstrate the converter performance over wide variations in input voltage and output power for PV applications. The proposed converter is a true isolated boost converter and has higher voltage conversion (boost) ratio compared to the conventional active-clamped converter.",
"title": ""
},
{
"docid": "neg:1840156_19",
"text": "Estimating the flows of rivers can have significant economic impact, as this can help in agricultural water management and in protection from water shortages and possible flood damage. The first goal of this paper is to apply neural networks to the problem of forecasting the flow of the River Nile in Egypt. The second goal of the paper is to utilize the time series as a benchmark to compare between several neural-network forecasting methods.We compare between four different methods to preprocess the inputs and outputs, including a novel method proposed here based on the discrete Fourier series. We also compare between three different methods for the multistep ahead forecast problem: the direct method, the recursive method, and the recursive method trained using a backpropagation through time scheme. We also include a theoretical comparison between these three methods. The final comparison is between different methods to perform longer horizon forecast, and that includes ways to partition the problem into the several subproblems of forecasting K steps ahead.",
"title": ""
}
] |
1840157 | Prior image constrained compressed sensing (PICCS): a method to accurately reconstruct dynamic CT images from highly undersampled projection data sets. | [
{
"docid": "pos:1840157_0",
"text": "The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the pixel representation; other, more complicated images have a sparse representation in some transform domain-for example, in terms of spatial finite-differences or their wavelet coefficients. According to the recently developed mathematical theory of compressed-sensing, images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used. Intuitively, artifacts due to random undersampling add as noise-like interference. In the sparse transform domain the significant coefficients stand out above the interference. A nonlinear thresholding scheme can recover the sparse coefficients, effectively recovering the image itself. In this article, practical incoherent undersampling schemes are developed and analyzed by means of their aliasing interference. Incoherence is introduced by pseudo-random variable-density undersampling of phase-encodes. The reconstruction is performed by minimizing the l(1) norm of a transformed image, subject to data fidelity constraints. Examples demonstrate improved spatial resolution and accelerated acquisition for multislice fast spin-echo brain imaging and 3D contrast enhanced angiography.",
"title": ""
}
] | [
{
"docid": "neg:1840157_0",
"text": "This paper presents the RF telecommunications system designed for the New Horizons mission, NASA’s planned mission to Pluto, with focus on new technologies developed to meet mission requirements. These technologies include an advanced digital receiver — a mission-enabler for its low DC power consumption at 2.3 W secondary power. The receiver is one-half of a card-based transceiver that is incorporated with other spacecraft functions into an integrated electronics module, providing further reductions in mass and power. Other developments include extending APL’s long and successful flight history in ultrastable oscillators (USOs) with an updated design for lower DC power. These USOs offer frequency stabilities to 1 part in 10, stabilities necessary to support New Horizons’ uplink radio science experiment. In antennas, the 2.1 meter high gain antenna makes use of shaped sub- and main reflectors to improve system performance and achieve a gain approaching 44 dBic. New Horizons would also be the first deep-space mission to fly a regenerative ranging system, offering up to a 30 dB performance improvement over sequential ranging, especially at long ranges. The paper will provide an overview of the current system design and development and performance details on the new technologies mentioned above. Other elements of the telecommunications system will also be discussed. Note: New Horizons is NASA’s planned mission to Pluto, and has not been approved for launch. All representations made in this paper are contingent on a decision by NASA to go forward with the preparation for and launch of the mission.",
"title": ""
},
{
"docid": "neg:1840157_1",
"text": "This paper describes the integration of the Alice 3D virtual worlds environment into many disciplines in elementary school, middle school and high school. We have developed a wide range of Alice instructional materials including tutorials for both computer science concepts and animation concepts. To encourage the building of more complicated worlds, we have developed template Alice classes and worlds. With our materials, teachers and students are exposed to computing concepts while using Alice to create projects, stories, games and quizzes. These materials were successfully used in the summers 2008 and 2009 in training and working with over 130 teachers.",
"title": ""
},
{
"docid": "neg:1840157_2",
"text": "In the field of voice therapy, perceptual evaluation is widely used by expert listeners as a way to evaluate pathological and normal voice quality. This approach is understandably subjective, as it is subject to listeners’ bias, and high inter- and intra-listener variability can be found. As such, research on automatic assessment of pathological voices using a combination of subjective and objective analyses emerged. The present study aimed to develop a complementary automatic assessment system for voice quality based on the well-known GRBAS scale by using a battery of multidimensional acoustical measures through Deep Neural Networks. A total of 44 dimensionality parameters including Mel-frequency Cepstral Coefficients, Smoothed Cepstral Peak Prominence and Long-Term Average Spectrum was adopted. In addition, the state-of-the-art automatic assessment system based on Modulation Spectrum (MS) features and GMM classifiers was used as comparison system. The classification results using the proposed method revealed a moderate correlation with subjective GRBAS scores of dysphonic severity, and yielded a better performance than the MS-GMM system, with the best accuracy around 81.53%. The findings indicate that such an assessment system can be used as an appropriate evaluation tool in determining the presence and severity of voice disorders.",
"title": ""
},
{
"docid": "neg:1840157_3",
"text": "Web-based enterprises process events generated by millions of users interacting with their websites. Rich statistical data distilled from combining such interactions in near real-time generates enormous business value. In this paper, we describe the architecture of Photon, a geographically distributed system for joining multiple continuously flowing streams of data in real-time with high scalability and low latency, where the streams may be unordered or delayed. The system fully tolerates infrastructure degradation and datacenter-level outages without any manual intervention. Photon guarantees that there will be no duplicates in the joined output (at-most-once semantics) at any point in time, that most joinable events will be present in the output in real-time (near-exact semantics), and exactly-once semantics eventually.\n Photon is deployed within Google Advertising System to join data streams such as web search queries and user clicks on advertisements. It produces joined logs that are used to derive key business metrics, including billing for advertisers. Our production deployment processes millions of events per minute at peak with an average end-to-end latency of less than 10 seconds. We also present challenges and solutions in maintaining large persistent state across geographically distant locations, and highlight the design principles that emerged from our experience.",
"title": ""
},
{
"docid": "neg:1840157_4",
"text": "We study weakly-supervised video object grounding: given a video segment and a corresponding descriptive sentence, the goal is to localize objects that are mentioned from the sentence in the video. During training, no object bounding boxes are available, but the set of possible objects to be grounded is known beforehand. Existing approaches in the image domain use Multiple Instance Learning (MIL) to ground objects by enforcing matches between visual and semantic features. A naive extension of this approach to the video domain is to treat the entire segment as a bag of spatial object proposals. However, an object existing sparsely across multiple frames might not be detected completely since successfully spotting it from one single frame would trigger a satisfactory match. To this end, we propagate the weak supervisory signal from the segment level to frames that likely contain the target object. For frames that are unlikely to contain the target objects, we use an alternative penalty loss. We also leverage the interactions among objects as a textual guide for the grounding. We evaluate our model on the newly collected benchmark YouCook2-BoundingBox and show improvements over competitive baselines.",
"title": ""
},
{
"docid": "neg:1840157_5",
"text": "We consider the online metric matching problem. In this problem, we are given a graph with edge weights satisfying the triangle inequality, and k vertices that are designated as the right side of the matching. Over time up to k requests arrive at an arbitrary subset of vertices in the graph and each vertex must be matched to a right side vertex immediately upon arrival. A vertex cannot be rematched to another vertex once it is matched. The goal is to minimize the total weight of the matching. We give an O(log^2 k) competitive randomized algorithm for the problem. This improves upon the best known guarantee of O(log^3 k) due to Meyerson, Nanavati and Poplawski [19]. It is well known that no deterministic algorithm can have a competitive ratio less than 2k − 1, and that no randomized algorithm can have a competitive ratio of less than ln k.",
"title": ""
},
{
"docid": "neg:1840157_6",
"text": "In this work we perform experiments with the recently published work on Capsule Networks. Capsule Networks have been shown to deliver state-of-the-art performance for MNIST and claim to have greater discriminative power than Convolutional Neural Networks for special tasks, such as recognizing overlapping digits. The authors of Capsule Networks have evaluated datasets with a low number of categories, viz. MNIST, CIFAR-10, SVHN among others. We evaluate capsule networks on three datasets, viz. Traffic Signals, Food101, and CIFAR10, with a smaller number of iterations, making changes to the architecture to account for RGB images. Traditional techniques like dropout and batch normalization were applied to capsule networks for performance evaluation.",
"title": ""
},
{
"docid": "neg:1840157_7",
"text": "Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets",
"title": ""
},
{
"docid": "neg:1840157_8",
"text": "Dual stripline routing is more and more widely used in the modern high speed PCB design due to its cost advantage of reduced overall layer count. However, the major challenge of a successful dual stripline design is to handle the additional interferences introduced by the signals on adjacent layers. This paper studies the crosstalk effect of the dual stripline with both parallel and angled routing, and proposes design solutions to tackle the challenge. Analytical and empirical algorithms are proposed to estimate the crosstalk waveforms from multiple aggressors, which provide quick design risk assessment, and the waveform is well correlated to the 3D full wave EM simulation results.",
"title": ""
},
{
"docid": "neg:1840157_9",
"text": "Straightforward application of Deep Belief Nets (DBNs) to acoustic modeling produces a rich distributed representation of speech data that is useful for recognition and yields impressive results on the speaker-independent TIMIT phone recognition task. However, the first-layer Gaussian-Bernoulli Restricted Boltzmann Machine (GRBM) has an important limitation, shared with mixtures of diagonal-covariance Gaussians: GRBMs treat different components of the acoustic input vector as conditionally independent given the hidden state. The mean-covariance restricted Boltzmann machine (mcRBM), first introduced for modeling natural images, is a much more representationally efficient and powerful way of modeling the covariance structure of speech data. Every configuration of the precision units of the mcRBM specifies a different precision matrix for the conditional distribution over the acoustic space. In this work, we use the mcRBM to learn features of speech data that serve as input into a standard DBN. The mcRBM features combined with DBNs allow us to achieve a phone error rate of 20.5%, which is superior to all published results on speaker-independent TIMIT to date.",
"title": ""
},
{
"docid": "neg:1840157_10",
"text": "This review paper focusses on DESMO-J, a comprehensive and stable Java-based open-source simulation library. DESMO-J is recommended in numerous academic publications for implementing discrete event simulation models for various applications. The library was integrated into several commercial software products. DESMO-J’s functional range and usability is continuously improved by the Department of Informatics of the University of Hamburg (Germany). The paper summarizes DESMO-J’s core functionality and important design decisions. It also compares DESMO-J to other discrete event simulation frameworks. Furthermore, latest developments and new opportunities are addressed in more detail. These include a) improvements relating to the quality and applicability of the software itself, e.g. a port to .NET, b) optional extension packages like visualization libraries and c) new components facilitating a more powerful and flexible simulation logic, like adaption to real time or a compact representation of production chains and similar queuing systems. Finally, the paper exemplarily describes how to apply DESMO-J to harbor logistics and business process modeling, thus providing insights into DESMO-J practice.",
"title": ""
},
{
"docid": "neg:1840157_11",
"text": "In the last decade, deep learning algorithms have become very popular thanks to the achieved performance in many machine learning and computer vision tasks. However, most of the deep learning architectures are vulnerable to so called adversarial examples. This questions the security of deep neural networks (DNN) for many securityand trust-sensitive domains. The majority of the proposed existing adversarial attacks are based on the differentiability of the DNN cost function. Defence strategies are mostly based on machine learning and signal processing principles that either try to detect-reject or filter out the adversarial perturbations and completely neglect the classical cryptographic component in the defence. In this work, we propose a new defence mechanism based on the second Kerckhoffs’s cryptographic principle which states that the defence and classification algorithm are supposed to be known, but not the key. To be compliant with the assumption that the attacker does not have access to the secret key, we will primarily focus on a gray-box scenario and do not address a white-box one. More particularly, we assume that the attacker does not have direct access to the secret block, but (a) he completely knows the system architecture, (b) he has access to the data used for training and testing and (c) he can observe the output of the classifier for each given input. We show empirically that our system is efficient against most famous state-of-the-art attacks in black-box and gray-box scenarios.",
"title": ""
},
{
"docid": "neg:1840157_12",
"text": "Protocol reverse engineering has often been a manual process that is considered time-consuming, tedious and error-prone. To address this limitation, a number of solutions have recently been proposed to allow for automatic protocol reverse engineering. Unfortunately, they are either limited in extracting protocol fields due to lack of program semantics in network traces or primitive in only revealing the flat structure of protocol format. In this paper, we present a system called AutoFormat that aims at not only extracting protocol fields with high accuracy, but also revealing the inherently “non-flat”, hierarchical structures of protocol messages. AutoFormat is based on the key insight that different protocol fields in the same message are typically handled in different execution contexts (e.g., the runtime call stack). As such, by monitoring the program execution, we can collect the execution context information for every message byte (annotated with its offset in the entire message) and cluster them to derive the protocol format. We have evaluated our system with more than 30 protocol messages from seven protocols, including two text-based protocols (HTTP and SIP), three binary-based protocols (DHCP, RIP, and OSPF), one hybrid protocol (CIFS/SMB), as well as one unknown protocol used by a real-world malware. Our results show that AutoFormat can not only identify individual message fields automatically and with high accuracy (an average 93.4% match ratio compared with Wireshark), but also unveil the structure of the protocol format by revealing possible relations (e.g., sequential, parallel, and hierarchical) among the message fields. Part of this research has been supported by the National Science Foundation under grants CNS-0716376 and CNS-0716444. The bulk of this work was performed when the first author was visiting George Mason University in Summer 2007.",
"title": ""
},
{
"docid": "neg:1840157_13",
"text": "The effectiveness of the treatment of breast cancer depends on its timely detection. An early step in the diagnosis is the cytological examination of breast material obtained directly from the tumor. This work reports on advances in computer-aided breast cancer diagnosis based on the analysis of cytological images of fine needle biopsies to characterize these biopsies as either benign or malignant. Instead of relying on the accurate segmentation of cell nuclei, the nuclei are estimated by circles using the circular Hough transform. The resulting circles are then filtered to keep only high-quality estimations for further analysis by a support vector machine which classifies detected circles as correct or incorrect on the basis of texture features and the percentage of nuclei pixels according to a nuclei mask obtained using Otsu's thresholding method. A set of 25 features of the nuclei is used in the classification of the biopsies by four different classifiers. The complete diagnostic procedure was tested on 737 microscopic images of fine needle biopsies obtained from patients and achieved 98.51% effectiveness. The results presented in this paper demonstrate that a computerized medical diagnosis system based on our method would be effective, providing valuable, accurate diagnostic information.",
"title": ""
},
{
"docid": "neg:1840157_14",
"text": "The density of neustonic plastic particles was compared to that of zooplankton in the coastal ocean near Long Beach, California. Two trawl surveys were conducted, one after an extended dry period when there was little land-based runoff, the second shortly after a storm when runoff was extensive. On each survey, neuston samples were collected at five sites along a transect parallel to shore using a manta trawl lined with 333 micro mesh. Average plastic density during the study was 8 pieces per cubic meter, though density after the storm was seven times that prior to the storm. The mass of plastics was also higher after the storm, though the storm effect on mass was less than it was for density, reflecting a smaller average size of plastic particles after the storm. The average mass of plastic was two and a half times greater than that of plankton, and even greater after the storm. The spatial pattern of the ratio also differed before and after a storm. Before the storm, greatest plastic to plankton ratios were observed at two stations closest to shore, whereas after the storm these had the lowest ratios.",
"title": ""
},
{
"docid": "neg:1840157_15",
"text": "In this paper, we present a system including a novel component called programmable aperture and two associated post-processing algorithms for high-quality light field acquisition. The shape of the programmable aperture can be adjusted and used to capture light field at full sensor resolution through multiple exposures without any additional optics and without moving the camera. High acquisition efficiency is achieved by employing an optimal multiplexing scheme, and quality data is obtained by using the two post-processing algorithms designed for self calibration of photometric distortion and for multi-view depth estimation. View-dependent depth maps thus generated help boost the angular resolution of light field. Various post-exposure photographic effects are given to demonstrate the effectiveness of the system and the quality of the captured light field.",
"title": ""
},
{
"docid": "neg:1840157_16",
"text": "This paper proposes an architecture for the mapping between syntax and phonology – in particular, that aspect of phonology that determines the linear ordering of words. We propose that linearization is restricted in two key ways. (1) the relative ordering of words is fixed at the end of each phase, or ‘‘Spell-out domain’’; and (2) ordering established in an earlier phase may not be revised or contradicted in a later phase. As a consequence, overt extraction out of a phase P may apply only if the result leaves unchanged the precedence relations established in P. We argue first that this architecture (‘‘cyclic linearization’’) gives us a means of understanding the reasons for successive-cyclic movement. We then turn our attention to more specific predictions of the proposal: in particular, the effects of Holmberg’s Generalization on Scandinavian Object Shift; and also the Inverse Holmberg Effects found in Scandinavian ‘‘Quantifier Movement’’ constructions (Rögnvaldsson (1987); Jónsson (1996); Svenonius (2000)) and in Korean scrambling configurations (Ko (2003, 2004)). The cyclic linearization proposal makes predictions that cross-cut the details of particular syntactic configurations. For example, whether an apparent case of verb fronting results from V-to-C movement or from ‘‘remnant movement’’ of a VP whose complements have been removed by other processes, the verb should still be required to precede its complements after fronting if it preceded them before fronting according to an ordering established at an earlier phase. We argue that ‘‘cross-construction’’ consistency of this sort is in fact found.",
"title": ""
},
{
"docid": "neg:1840157_17",
"text": "This session studies specific challenges that Machine Learning (ML) algorithms have to tackle when faced with Big Data problems. These challenges can arise when any of the dimensions in a ML problem grows significantly: a) size of training set, b) size of test set or c) dimensionality. The studies included in this edition explore the extension of previous ML algorithms and practices to Big Data scenarios. Namely, specific algorithms for recurrent neural network training, ensemble learning, anomaly detection and clustering are proposed. The results obtained show that this new trend of ML problems presents both a challenge and an opportunity to obtain results which could allow ML to be integrated in many new applications in years to come.",
"title": ""
},
{
"docid": "neg:1840157_18",
"text": "This paper presents a study of passive Dickson based envelope detectors operating in the quadratic small signal regime, specifically intended to be used in RF front end of sensing units of IoE sensor nodes. Critical parameters such as open-circuit voltage sensitivity (OCVS), charge time, input impedance, and output noise are studied and simplified circuit models are proposed to predict the behavior of the detector, resulting in practical design intuitions. There is strong agreement between model predictions, simulation results and measurements of 15 representative test structures that were fabricated in a 130 nm RF CMOS process.",
"title": ""
},
{
"docid": "neg:1840157_19",
"text": "In this paper we describe AntHocNet, an algorithm for routing in mobile ad hoc networks. It is a hybrid algorithm, which combines reactive path setup with proactive path probing, maintenance and improvement. The algorithm is based on the Nature-inspired Ant Colony Optimization framework. Paths are learned by guided Monte Carlo sampling using ant-like agents communicating in a stigmergic way. In an extensive set of simulation experiments, we compare AntHocNet with AODV, a reference algorithm in the field. We show that our algorithm can outperform AODV on different evaluation criteria. AntHocNet’s performance advantage is visible over a broad range of possible network scenarios, and increases for larger, sparser and more mobile networks.",
"title": ""
}
] |
1840158 | A high efficiency low cost direct battery balancing circuit using a multi-winding transformer with reduced switch count | [
{
"docid": "pos:1840158_0",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Lead-acid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomena: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
}
] | [
{
"docid": "neg:1840158_0",
"text": "The purpose of this investigation was to examine the influence of upper-body static stretching and dynamic stretching on upper-body muscular performance. Eleven healthy men, who were National Collegiate Athletic Association Division I track and field athletes (age, 19.6 +/- 1.7 years; body mass, 93.7 +/- 13.8 kg; height, 183.6 +/- 4.6 cm; bench press 1 repetition maximum [1RM], 106.2 +/- 23.0 kg), participated in this study. Over 4 sessions, subjects participated in 4 different stretching protocols (i.e., no stretching, static stretching, dynamic stretching, and combined static and dynamic stretching) in a balanced randomized order followed by 4 tests: 30% of 1 RM bench throw, isometric bench press, overhead medicine ball throw, and lateral medicine ball throw. Depending on the exercise, test peak power (Pmax), peak force (Fmax), peak acceleration (Amax), peak velocity (Vmax), and peak displacement (Dmax) were measured. There were no differences among stretch trials for Pmax, Fmax, Amax, Vmax, or Dmax for the bench throw or for Fmax for the isometric bench press. For the overhead medicine ball throw, there were no differences among stretch trials for Vmax or Dmax. For the lateral medicine ball throw, there was no difference in Vmax among stretch trials; however, Dmax was significantly larger (p ≤ 0.05) for the static and dynamic condition compared to the static-only condition. In general, there was no short-term effect of stretching on upper-body muscular performance in young adult male athletes, regardless of stretch mode, potentially due to the amount of rest used after stretching before the performances. Since throwing performance was largely unaffected by static or dynamic upper-body stretching, athletes competing in the field events could perform upper-body stretching, if enough time were allowed before the performance. However, prior studies on lower-body musculature have demonstrated dramatic negative effects on speed and power. Therefore, it is recommended that a dynamic warm-up be used for the entire warm-up.",
"title": ""
},
{
"docid": "neg:1840158_1",
"text": "The appearance of Agile methods has been the most noticeable change to software process thinking in the last fifteen years [16], but in fact many of the “Agile ideas” have been around since the 70’s or even before. Many studies and reviews have been conducted about Agile methods which ascribe their emergence as a reaction against traditional methods. In this paper, we argue that although Agile methods are new as a whole, they have strong roots in the history of software engineering. In addition to the iterative and incremental approaches that have been in use since 1957 [21], people who criticised the traditional methods suggested alternative approaches which were actually Agile ideas such as the response to change, customer involvement, and working software over documentation. The authors of this paper believe that education about the history of Agile thinking will help to develop better understanding as well as promoting the use of Agile methods. We therefore present and discuss the reasons behind the development and introduction of Agile methods, as a reaction to traditional methods, as a result of people's experience, and in particular focusing on reusing ideas from history.",
"title": ""
},
{
"docid": "neg:1840158_2",
"text": "A cascade of fully convolutional neural networks is proposed to segment multi-modal Magnetic Resonance (MR) images with brain tumor into background and three hierarchical regions: whole tumor, tumor core and enhancing tumor core. The cascade is designed to decompose the multi-class segmentation problem into a sequence of three binary segmentation problems according to the subregion hierarchy. The whole tumor is segmented in the first step and the bounding box of the result is used for the tumor core segmentation in the second step. The enhancing tumor core is then segmented based on the bounding box of the tumor core segmentation result. Our networks consist of multiple layers of anisotropic and dilated convolution filters, and they are combined with multi-view fusion to reduce false positives. Residual connections and multi-scale predictions are employed in these networks to boost the segmentation performance. Experiments with BraTS 2017 validation set show that the proposed method achieved average Dice scores of 0.7859, 0.9050, 0.8378 for enhancing tumor core, whole tumor and tumor core, respectively. The corresponding values for BraTS 2017 testing set were 0.7831, 0.8739, and 0.7748, respectively.",
"title": ""
},
{
"docid": "neg:1840158_3",
"text": "We present in this paper the language NoFun for stating component quality in the framework of the ISO/IEC quality standards. The language consists of three different parts. In the first one, software quality characteristics and attributes are defined, probably in a hierarchical manner. As part of this definition, abstract quality models can be formulated and further refined into more specialised ones. In the second part, values are assigned to component quality basic attributes. In the third one, quality requirements can be stated over components, both context-free (universal quality properties) and context-dependent (quality properties for a given framework - software domain, company, project, etc.). Last, we address the translation of the language to UML, using its extension mechanisms for capturing the fundamental non-functional concepts.",
"title": ""
},
{
"docid": "neg:1840158_4",
"text": "The conceptualization of a distinct construct known as statistics anxiety has led to the development of numerous rating scales, including the Statistical Anxiety Rating Scale (STARS), designed to assess levels of statistics anxiety. In the current study, the STARS was administered to a sample of 423 undergraduate and graduate students from a midsized, western United States university. The Rasch measurement rating scale model was used to analyze scores from the STARS. Misfitting items were removed from the analysis. In general, items from the six subscales represented a broad range of abilities, with the major exception being a lack of items at the lower extremes of the subscales. Additionally, a differential item functioning (DIF) analysis was performed across sex and student classification. Several items displayed DIF, which indicates subgroups may ascribe different meanings to those items. The paper concludes with several recommendations for researchers considering using the STARS.",
"title": ""
},
{
"docid": "neg:1840158_5",
"text": "Paraphrase identification is an important topic in artificial intelligence and this task is often tackled as sequence alignment and matching. Traditional alignment methods take advantage of attention mechanism, which is a soft-max weighting technique. Weighting technique could pick out the most similar/dissimilar parts, but is weak in modeling the aligned unmatched parts, which are the crucial evidence to identify paraphrase. In this paper, we empower neural architecture with Hungarian algorithm to extract the aligned unmatched parts. Specifically, first, our model applies BiLSTM to parse the input sentences into hidden representations. Then, Hungarian layer leverages the hidden representations to extract the aligned unmatched parts. Last, we apply cosine similarity to metric the aligned unmatched parts for a final discrimination. Extensive experiments show that our model outperforms other baselines, substantially and significantly.",
"title": ""
},
{
"docid": "neg:1840158_6",
"text": "This paper presents a pilot-based compensation algorithm for mitigation of frequency-selective I/Q imbalances in direct-conversion OFDM transmitters. By deploying a feedback loop from RF to baseband, together with a properly-designed pilot signal structure, the I/Q imbalance properties of the transmitter are efficiently estimated in a subcarrier-wise manner. Based on the obtained I/Q imbalance knowledge, the imbalance effects on the actual transmit waveform are then mitigated by baseband pre-distortion acting on the mirror-subcarrier signals. The compensation performance of the proposed structure is analyzed using extensive computer simulations, indicating that very high image rejection ratios can be achieved in practical system set-ups with reasonable pilot signal lengths.",
"title": ""
},
{
"docid": "neg:1840158_7",
"text": "Since the middle ages, essential oils have been widely used for bactericidal, virucidal, fungicidal, antiparasitical, insecticidal, medicinal and cosmetic applications, especially nowadays in pharmaceutical, sanitary, cosmetic, agricultural and food industries. Because of the mode of extraction, mostly by distillation from aromatic plants, they contain a variety of volatile molecules such as terpenes and terpenoids, phenol-derived aromatic components and aliphatic components. In vitro physicochemical assays characterise most of them as antioxidants. However, recent work shows that in eukaryotic cells, essential oils can act as prooxidants affecting inner cell membranes and organelles such as mitochondria. Depending on type and concentration, they exhibit cytotoxic effects on living cells but are usually non-genotoxic. In some cases, changes in intracellular redox potential and mitochondrial dysfunction induced by essential oils can be associated with their capacity to exert antigenotoxic effects. These findings suggest that, at least in part, the encountered beneficial effects of essential oils are due to prooxidant effects on the cellular level.",
"title": ""
},
{
"docid": "neg:1840158_8",
"text": "Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well-studied, and the average-reward framework, in which interest is rapidly increasing. In this paper, we present a framework called sensitive discount optimality which offers an elegant way of linking these two paradigms. Although sensitive discount optimality has been well studied in dynamic programming, with several provably convergent algorithms, it has not received any attention in RL. This framework is based on studying the properties of the expected cumulative discounted reward, as discounting tends to 1. Under these conditions, the cumulative discounted reward can be expanded using a Laurent series expansion to yield a sequence of terms, the first of which is the average reward, the second involves the average adjusted sum of rewards (or bias), etc. We use the sensitive discount optimality framework to derive a new model-free average reward technique, which is related to Q-learning type methods proposed by Bertsekas, Schwartz, and Singh, but which unlike these previous methods, optimizes both the first and second terms in the Laurent series (average reward and bias values). Statement: This paper has not been submitted to any other conference.",
"title": ""
},
{
"docid": "neg:1840158_9",
"text": "Carsharing has emerged as an alternative to vehicle ownership and is a rapidly expanding global market. Particularly through the flexibility of free-floating models, car sharing complements public transport since customers do not need to return cars to specific stations. We present a novel data analytics approach that provides decision support to car sharing operators -- from local start-ups to global players -- in maneuvering this constantly growing and changing market environment. Using a large set of rental data, as well as zero-inflated and geographically weighted regression models, we derive indicators for the attractiveness of certain areas based on points of interest in their vicinity. These indicators are valuable for a variety of operational and strategic decisions. As a demonstration project, we present a case study of Berlin, where the indicators are used to identify promising regions for business area expansion.",
"title": ""
},
{
"docid": "neg:1840158_10",
"text": "BACKGROUND\nAbnormal scar development following burn injury can cause substantial physical and psychological distress to children and their families. Common burn scar prevention and management techniques include silicone therapy, pressure garment therapy, or a combination of both. Currently, no definitive, high-quality evidence is available for the effectiveness of topical silicone gel or pressure garment therapy for the prevention and management of burn scars in the paediatric population. Thus, this study aims to determine the effectiveness of these treatments in children.\n\n\nMETHODS\nA randomised controlled trial will be conducted at a large tertiary metropolitan children's hospital in Australia. Participants will be randomised to one of three groups: Strataderm® topical silicone gel only, pressure garment therapy only, or combined Strataderm® topical silicone gel and pressure garment therapy. Participants will include 135 children (45 per group) up to 16 years of age who are referred for scar management for a new burn. Children up to 18 years of age will also be recruited following surgery for burn scar reconstruction. Primary outcomes are scar itch intensity and scar thickness. Secondary outcomes include scar characteristics (e.g. colour, pigmentation, pliability, pain), the patient's, caregiver's and therapist's overall opinion of the scar, health service costs, adherence, health-related quality of life, treatment satisfaction and adverse effects. Measures will be completed on up to two sites per person at baseline and 1 week post scar management commencement, 3 months and 6 months post burn, or post burn scar reconstruction. Data will be analysed using descriptive statistics and univariate and multivariate regression analyses.\n\n\nDISCUSSION\nResults of this study will determine the effectiveness of three noninvasive scar interventions in children at risk of, and with, scarring post burn or post reconstruction.\n\n\nTRIAL REGISTRATION\nAustralian New Zealand Clinical Trials Registry, ACTRN12616001100482. Registered on 5 August 2016.",
"title": ""
},
{
"docid": "neg:1840158_11",
"text": "Gasification is one of the promising technologies to convert biomass to gaseous fuels for distributed power generation. However, the commercial exploitation of biomass energy suffers from a number of logistics and technological challenges. In this review, the barriers in each of the steps from the collection of biomass to electricity generation are highlighted. The effects of parameters in supply chain management, pretreatment and conversion of biomass to gas, and cleaning and utilization of gas for power generation are discussed. Based on the studies, until recently, the gasification of biomass and gas cleaning are the most challenging part. For electricity generation, either using engine or gas turbine requires a stringent specification of gas composition and tar concentration in the product gas. Different types of updraft and downdraft gasifiers have been developed for gasification and a number of physical and catalytic tar separation methods have been investigated. However, the most efficient and popular one is yet to be developed for commercial purpose. In fact, the efficient gasification and gas cleaning methods can produce highly burnable gas with less tar content, so as to reduce the total consumption of biomass for a desired quantity of electricity generation. According to the recent report, an advanced gasification method with efficient tar cleaning can significantly reduce the biomass consumption, and thus the logistics and biomass pretreatment problems can be ultimately reduced. © 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840158_12",
"text": "Conflicting guidelines for excisions about the alar base led us to develop calibrated alar base excision, a modification of Weir's approach. In approximately 20% of 1500 rhinoplasties this technique was utilized as a final step. Of these patients, 95% had lateral wall excess (“tall nostrils”), 2% had nostril floor excess (“wide nostrils”), 2% had a combination of these (“tall-wide nostrils”), and 1% had thick nostril rims. Lateral wall excess length is corrected by a truncated crescent excision of the lateral wall above the alar crease. Nasal floor excess is improved by an excision of the nasal sill. Combination noses (e.g., tall-wide) are approached with a combination alar base excision. Finally, noses with thick rims are improved with diamond excision. Closure of the excision is accomplished with fine simple external sutures. Electrocautery is unnecessary and deep sutures are utilized only in wide noses. Few complications were noted. Benefits of this approach include straightforward surgical guidelines, a natural-appearing correction, avoidance of notching or obvious scarring, and it is quick and simple.",
"title": ""
},
{
"docid": "neg:1840158_13",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
},
{
"docid": "neg:1840158_14",
"text": "Monte Carlo Tree Search methods have led to huge progress in Computer Go. Still, program performance is uneven: most current Go programs are much stronger in some aspects of the game, such as local fighting and positional evaluation, than in others. Well known weaknesses of many programs include the handling of several simultaneous fights, including the “two safe groups” problem, and dealing with coexistence in seki. Starting with a review of MCTS techniques, several conjectures regarding the behavior of MCTS-based Go programs in specific types of Go situations are made. Then, an extensive empirical study of ten leading Go programs investigates their performance on two specifically designed test sets containing “two safe group” and seki situations. The results give a good indication of the state of the art in computer Go as of 2012/2013. They show that while a few of the very top programs can apparently solve most of these evaluation problems in their playouts already, these problems are difficult to solve by global search.",
"title": ""
},
{
"docid": "neg:1840158_15",
"text": "Autonomous household robots are supposed to accomplish complex tasks like cleaning the dishes which involve both navigation and manipulation within the environment. For navigation, spatial information is mostly sufficient, but manipulation tasks raise the demand for deeper knowledge about objects, such as their types, their functions, or the way how they can be used. We present KNOWROB-MAP, a system for building environment models for robots by combining spatial information about objects in the environment with encyclopedic knowledge about the types and properties of objects, with common-sense knowledge describing what the objects can be used for, and with knowledge derived from observations of human activities by learning statistical relational models. In this paper, we describe the concept and implementation of KNOWROB-MAP and present several examples demonstrating the range of information the system can provide to autonomous robots.",
"title": ""
},
{
"docid": "neg:1840158_16",
"text": "Object tracking is still a critical and challenging problem with many applications in computer vision. For this challenge, more and more researchers pay attention to applying deep learning to get powerful feature for better tracking accuracy. In this paper, a novel triplet loss is proposed to extract expressive deep feature for object tracking by adding it into Siamese network framework instead of pairwise loss for training. Without adding any inputs, our approach is able to utilize more elements for training to achieve more powerful feature via the combination of original samples. Furthermore, we propose a theoretical analysis by combining comparison of gradients and back-propagation, to prove the effectiveness of our method. In experiments, we apply the proposed triplet loss for three real-time trackers based on Siamese network. And the results on several popular tracking benchmarks show our variants operate at almost the same frame-rate with baseline trackers and achieve superior tracking performance than them, as well as the comparable accuracy with recent state-of-the-art real-time trackers.",
"title": ""
},
{
"docid": "neg:1840158_17",
"text": "As an essential operation in data cleaning, the similarity join has attracted considerable attention from the database community. In this article, we study string similarity joins with edit-distance constraints, which find similar string pairs from two large sets of strings whose edit distance is within a given threshold. Existing algorithms are efficient either for short strings or for long strings, and there is no algorithm that can efficiently and adaptively support both short strings and long strings. To address this problem, we propose a new filter, called the segment filter. We partition a string into a set of segments and use the segments as a filter to find similar string pairs. We first create inverted indices for the segments. Then for each string, we select some of its substrings, identify the selected substrings from the inverted indices, and take strings on the inverted lists of the found substrings as candidates of this string. Finally, we verify the candidates to generate the final answer. We devise efficient techniques to select substrings and prove that our method can minimize the number of selected substrings. We develop novel pruning techniques to efficiently verify the candidates. We also extend our techniques to support normalized edit distance. Experimental results show that our algorithms are efficient for both short strings and long strings, and outperform state-of-the-art methods on real-world datasets.",
"title": ""
},
{
"docid": "neg:1840158_18",
"text": "An intrinsic part of information extraction is the creation and manipulation of relations extracted from text. In this article, we develop a foundational framework where the central construct is what we call a document spanner (or just spanner for short). A spanner maps an input string into a relation over the spans (intervals specified by bounding indices) of the string. The focus of this article is on the representation of spanners. Conceptually, there are two kinds of such representations. Spanners defined in a primitive representation extract relations directly from the input string; those defined in an algebra apply algebraic operations to the primitively represented spanners. This framework is driven by SystemT, an IBM commercial product for text analysis, where the primitive representation is that of regular expressions with capture variables.\n We define additional types of primitive spanner representations by means of two kinds of automata that assign spans to variables. We prove that the first kind has the same expressive power as regular expressions with capture variables; the second kind expresses precisely the algebra of the regular spanners—the closure of the first kind under standard relational operators. The core spanners extend the regular ones by string-equality selection (an extension used in SystemT). We give some fundamental results on the expressiveness of regular and core spanners. As an example, we prove that regular spanners are closed under difference (and complement), but core spanners are not. Finally, we establish connections with related notions in the literature.",
"title": ""
},
{
"docid": "neg:1840158_19",
"text": "The Pittsburgh Sleep Quality Index (PSQI) is a widely used measure of sleep quality in adolescents, but information regarding its psychometric strengths and weaknesses in this population is limited. In particular, questions remain regarding whether it measures one or two sleep quality domains. The aims of the present study were to (a) adapt the PSQI for use in adolescents and young adults, and (b) evaluate the psychometric properties of the adapted measure in this population. The PSQI was slightly modified to make it more appropriate for use in youth populations and was translated into Spanish for administration to the sample population available to the study investigators. It was then administered with validity criterion measures to a community-based sample of Spanish adolescents and young adults (AYA) between 14 and 24 years old (N = 216). The results indicated that the questionnaire (AYA-PSQI-S) assesses a single factor. The total score evidenced good convergent and divergent validity and moderate reliability (Cronbach's alpha = .72). The AYA-PSQI-S demonstrates adequate psychometric properties for use in clinical trials involving adolescents and young adults. Additional research to further evaluate the reliability and validity of the measure for use in clinical settings is warranted.",
"title": ""
}
] |
1840159 | A clickstream-based collaborative filtering personalization model: towards a better performance | [
{
"docid": "pos:1840159_0",
"text": "Predicting items a user would like on the basis of other users’ ratings for these items has become a well-established strategy adopted by many recommendation services on the Internet. Although this can be seen as a classification problem, algorithms proposed thus far do not draw on results from the machine learning literature. We propose a representation for collaborative filtering tasks that allows the application of virtually any machine learning algorithm. We identify the shortcomings of current collaborative filtering techniques and propose the use of learning algorithms paired with feature extraction techniques that specifically address the limitations of previous approaches. Our best-performing algorithm is based on the singular value decomposition of an initial matrix of user ratings, exploiting latent structure that essentially eliminates the need for users to rate common items in order to become predictors for one another's preferences. We evaluate the proposed algorithm on a large database of user ratings for motion pictures and find that our approach significantly outperforms current collaborative filtering algorithms.",
"title": ""
},
{
"docid": "pos:1840159_1",
"text": "Markov models have been extensively used to model Web users' navigation behaviors on Web sites. The link structure of a Web site can be seen as a citation network. By applying bibliographic co-citation and coupling analysis to a Markov model constructed from a Web log file on a Web site, we propose a clustering algorithm called CitationCluster to cluster conceptually related pages. The clustering results are used to construct a conceptual hierarchy of the Web site. Markov model based link prediction is integrated with the hierarchy to assist users' navigation on the Web site.",
"title": ""
}
] | [
{
"docid": "neg:1840159_0",
"text": "Many studies have investigated factors that affect susceptibility to false memories. However, few have investigated the role of sleep deprivation in the formation of false memories, despite overwhelming evidence that sleep deprivation impairs cognitive function. We examined the relationship between self-reported sleep duration and false memories and the effect of 24 hr of total sleep deprivation on susceptibility to false memories. We found that under certain conditions, sleep deprivation can increase the risk of developing false memories. Specifically, sleep deprivation increased false memories in a misinformation task when participants were sleep deprived during event encoding, but did not have a significant effect when the deprivation occurred after event encoding. These experiments are the first to investigate the effect of sleep deprivation on susceptibility to false memories, which can have dire consequences.",
"title": ""
},
{
"docid": "neg:1840159_1",
"text": "An approach to high field control, particularly in the areas near the high voltage (HV) and ground terminals of an outdoor insulator, is proposed using a nonlinear grading material; Zinc Oxide (ZnO) microvaristors compounded with other polymeric materials to obtain the required properties and allow easy application. The electrical properties of the microvaristor compounds are characterised by a nonlinear field-dependent conductivity. This paper describes the principles of the proposed field-control solution and demonstrates the effectiveness of the proposed approach in controlling the electric field along insulator profiles. A case study is carried out for a typical 11 kV polymeric insulator design to highlight the merits of the grading approach. Analysis of electric potential and field distributions on the insulator surface is described under dry clean and uniformly contaminated surface conditions for both standard and microvaristor-graded insulators. The grading and optimisation principles to allow better performance are investigated to improve the performance of the insulator both under steady state operation and under surge conditions. Furthermore, the dissipated power and associated heat are derived to examine surface heating and losses in the grading regions and for the complete insulator. Preliminary tests on inhouse prototype insulators have confirmed better flashover performance of the proposed graded insulator with a 21 % increase in flashover voltage.",
"title": ""
},
{
"docid": "neg:1840159_2",
"text": "Machine learning’s ability to rapidly evolve to changing and complex situations has helped it become a fundamental tool for computer security. That adaptability is also a vulnerability: attackers can exploit machine learning systems. We present a taxonomy identifying and analyzing attacks against machine learning systems. We show how these classes influence the costs for the attacker and defender, and we give a formal structure defining their interaction. We use our framework to survey and analyze the literature of attacks against machine learning systems. We also illustrate our taxonomy by showing how it can guide attacks against SpamBayes, a popular statistical spam filter. Finally, we discuss how our taxonomy suggests new lines of defenses.",
"title": ""
},
{
"docid": "neg:1840159_3",
"text": "To date, no short scale exists with strong psychometric properties that can assess problematic pornography consumption based on an overarching theoretical background. The goal of the present study was to develop a brief scale, the Problematic Pornography Consumption Scale (PPCS), based on Griffiths's (2005) six-component addiction model that can distinguish between nonproblematic and problematic pornography use. The PPCS was developed using an online sample of 772 respondents (390 females, 382 males; Mage = 22.56, SD = 4.98 years). Creation of items was based on previous problematic pornography use instruments and on the definitions of factors in Griffiths's model. A confirmatory factor analysis (CFA) was carried out-because the scale is based on a well-established theoretical model-leading to an 18-item second-order factor structure. The reliability of the PPCS was excellent, and measurement invariance was established. In the current sample, 3.6% of the users belonged to the at-risk group. Based on sensitivity and specificity analyses, we identified an optimal cutoff to distinguish between problematic and nonproblematic pornography users. The PPCS is a multidimensional scale of problematic pornography use with a strong theoretical basis that also has strong psychometric properties in terms of factor structure and reliability.",
"title": ""
},
{
"docid": "neg:1840159_4",
"text": "In many surveillance applications it is desirable to determine if a given individual has been previously observed over a network of cameras. This is the person reidentification problem. This paper focuses on reidentification algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Person reidentification approaches have two aspects: (i) establish correspondence between parts, and (ii) generate signatures that are invariant to variations in illumination, pose, and the dynamic appearance of clothing. A novel spatiotemporal segmentation algorithm is employed to generate salient edgels that are robust to changes in appearance of clothing. The invariant signatures are generated by combining normalized color and salient edgel histograms. Two approaches are proposed to generate correspondences: (i) a model based approach that fits an articulated model to each individual to establish a correspondence map, and (ii) an interest point operator approach that nominates a large number of potential correspondences which are evaluated using a region growing scheme. Finally, the approaches are evaluated on a 44 person database across 3 disparate views.",
"title": ""
},
{
"docid": "neg:1840159_5",
"text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. First, we seek to evaluate the desired characteristics that provenance systems are expected to have. Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.",
"title": ""
},
{
"docid": "neg:1840159_6",
"text": "Human Activity Recognition is one of the attractive topics to develop smart interactive environment in which computing systems can understand human activities in natural context. Besides traditional approaches with visual data, inertial sensors in wearable devices provide a promising approach for human activity recognition. In this paper, we propose novel methods to recognize human activities from raw data captured from inertial sensors using convolutional neural networks with either 2D or 3D filters. We also take advantage of hand-crafted features to combine with learned features from Convolution-Pooling blocks to further improve accuracy for activity recognition. Experiments on UCI Human Activity Recognition dataset with six different activities demonstrate that our method can achieve 96.95%, higher than existing methods.",
"title": ""
},
{
"docid": "neg:1840159_7",
"text": "In this paper we describe our implementation of algorithms for face detection and recognition in color images under Matlab. For face detection, we trained a feedforward neural network to perform skin segmentation, followed by the eyes detection, face alignment, lips detection and face delimitation. The eyes were detected by analyzing the chrominance and the angle between neighboring pixels and, then, the results were used to perform face alignment. The lips were detected based on the analysis of the Red color component intensity in the lower face region. Finally, the faces were delimited using the eyes and lips positions. The face recognition involved a classifier that used the standard deviation of the difference between color matrices of the faces to identify the input face. The algorithms were run on Faces 1999 dataset. The proposed method achieved 96.9%, 89% and 94% correct detection rate of face, eyes and lips, respectively. The correctness rate of the face recognition algorithm was 70.7%.",
"title": ""
},
{
"docid": "neg:1840159_8",
"text": "The olfactory system is an essential part of human physiology, with a rich evolutionary history. Although humans are less dependent on chemosensory input than are other mammals (Niimura 2009, Hum. Genomics 4:107-118), olfactory function still plays a critical role in health and behavior. The detection of hazards in the environment, generating feelings of pleasure, promoting adequate nutrition, influencing sexuality, and maintenance of mood are described roles of the olfactory system, while other novel functions are being elucidated. A growing body of evidence has implicated a role for olfaction in such diverse physiologic processes as kin recognition and mating (Jacob et al. 2002a, Nat. Genet. 30:175-179; Horth 2007, Genomics 90:159-175; Havlicek and Roberts 2009, Psychoneuroendocrinology 34:497-512), pheromone detection (Jacob et al. 200b, Horm. Behav. 42:274-283; Wyart et al. 2007, J. Neurosci. 27:1261-1265), mother-infant bonding (Doucet et al. 2009, PLoS One 4:e7579), food preferences (Mennella et al. 2001, Pediatrics 107:E88), central nervous system physiology (Welge-Lüssen 2009, B-ENT 5:129-132), and even longevity (Murphy 2009, JAMA 288:2307-2312). The olfactory system, although phylogenetically ancient, has historically received less attention than other special senses, perhaps due to challenges related to its study in humans. In this article, we review the anatomic pathways of olfaction, from peripheral nasal airflow leading to odorant detection, to epithelial recognition of these odorants and related signal transduction, and finally to central processing. Olfactory dysfunction, which can be defined as conductive, sensorineural, or central (typically related to neurodegenerative disorders), is a clinically significant problem, with a high burden on quality of life that is likely to grow in prevalence due to demographic shifts and increased environmental exposures.",
"title": ""
},
{
"docid": "neg:1840159_9",
"text": "In the last decade, deep learning has contributed to advances in a wide range computer vision tasks including texture analysis. This paper explores a new approach for texture segmentation using deep convolutional neural networks, sharing important ideas with classic filter bank based texture segmentation methods. Several methods are developed to train Fully Convolutional Networks to segment textures in various applications. We show in particular that these networks can learn to recognize and segment a type of texture, e.g. wood and grass from texture recognition datasets (no training segmentation). We demonstrate that Fully Convolutional Networks can learn from repetitive patterns to segment a particular texture from a single image or even a part of an image. We take advantage of these findings to develop a method that is evaluated on a series of supervised and unsupervised experiments and improve the state of the art on the Prague texture segmentation datasets.",
"title": ""
},
{
"docid": "neg:1840159_10",
"text": "This study has devoted much effort to developing an integrated model designed to predict and explain an individual’s continued use of online services based on the concepts of the expectation disconfirmation model and the theory of planned behavior. Empirical data was collected from a field survey of Cyber University System (CUS) users to verify the fit of the hypothetical model. The measurement model indicates the theoretical constructs have adequate reliability and validity while the structural equation model is illustrated as having a high model fit for empirical data. Study’s findings show that a customer’s behavioral intention towards e-service continuance is mainly determined by customer satisfaction and additionally affected by perceived usefulness and subjective norm. Generally speaking, the integrated model can fully reflect the spirit of the expectation disconfirmation model and take advantage of planned behavior theory. After consideration of the impact of systemic features, personal characteristics, and social influence on customer behavior, the integrated model had a better explanatory advantage than other EDM-based models proposed in prior research. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840159_11",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "neg:1840159_12",
"text": "We have been developing the Network Incident analysis Center for Tactical Emergency Response (nicter), whose objective is to detect and identify propagating malwares. The nicter mainly monitors darknet, a set of unused IP addresses, to observe global trends of network threats, while it captures and analyzes malware executables. By correlating the network threats with analysis results of malware, the nicter identifies the root causes (malwares) of the detected network threats. Through a long-term operation of the nicter for more than five years, we have achieved some key findings that would help us to understand the intentions of attackers and the comprehensive threat landscape of the Internet. With a focus on a well-knwon malware, i. e., W32.Downadup, this paper provides some practical case studies with considerations and consequently we could obtain a threat landscape that more than 60% of attacking hosts observed in our dark-net could be infected by W32.Downadup. As an evaluation, we confirmed that the result of the correlation analysis was correct in a rate of 86.18%.",
"title": ""
},
{
"docid": "neg:1840159_13",
"text": "Test-Driven Development (TDD) is an agile practice that is widely accepted and advocated by most agile methods and methodologists. In this paper, we report on a longitudinal case study of an IBM team who has sustained use of TDD for five years and over ten releases of a Java-implemented product. The team worked from a design and wrote tests incrementally before or while they wrote code and, in the process, developed a significant asset of automated tests. The IBM team realized sustained quality improvement relative to a pre-TDD project and consistently had defect density below industry standards. As a result, our data indicate that the TDD practice can aid in the production of high quality products. This quality improvement would compensate for the moderate perceived productivity losses. Additionally, the use of TDD may decrease the degree to which code complexity increases as software ages.",
"title": ""
},
{
"docid": "neg:1840159_14",
"text": "This paper presents a wideband circularly polarized millimeter-wave (mmw) antenna design. We introduce a novel 3-D-printed polarizer, which consists of several air and dielectric slabs to transform the polarization of the antenna radiation from linear to circular. The proposed polarizer is placed above a radiating aperture operating at the center frequency of 60 GHz. An electric field, <inline-formula> <tex-math notation=\"LaTeX\">${E}$ </tex-math></inline-formula>, radiated from the aperture generates two components of electric fields, <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula>. After passing through the polarizer, both <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> fields can be degenerated with an orthogonal phase difference which results in having a wide axial ratio bandwidth. The phase difference between <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {x}}$ </tex-math></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">${E} _{\\mathrm {y}}$ </tex-math></inline-formula> is determined by the incident angle <inline-formula> <tex-math notation=\"LaTeX\">$\\phi $ </tex-math></inline-formula>, of the polarization of the electric field to the polarizer as well as the thickness, <inline-formula> <tex-math notation=\"LaTeX\">${h}$ </tex-math></inline-formula>, of the dielectric slabs. With the help of the thickness of the polarizer, the directivity of the radiation pattern is increased so as to devote high-gain and wideband characteristics to the antenna. To verify our concept, an intensive parametric study and an experiment were carried out. Three antenna sources, including dipole, patch, and aperture antennas, were investigated with the proposed 3-D-printed polarizer. All measured results agree with the theoretical analysis. The proposed antenna with the polarizer achieves a wide impedance bandwidth of 50% from 45 to 75 GHz for the reflection coefficient less than or equal −10 dB, and yields an overlapped axial ratio bandwidth of 30% from 49 to 67 GHz for the axial ratio ≤ 3 dB. The maximum gain of the antenna reaches to 15 dBic. The proposed methodology of this design can apply to applications related to mmw wireless communication systems. The ultimate goal of this paper is to develop a wideband, high-gain, and low-cost antenna for the mmw frequency band.",
"title": ""
},
{
"docid": "neg:1840159_15",
"text": "Video object segmentation is a fundamental step in many advanced vision applications. Most existing algorithms are based on handcrafted features such as HOG, super-pixel segmentation or texturebased techniques, while recently deep features have been found to be more efficient. Existing algorithms observe performance degradation in the presence of challenges such as illumination variations, shadows, and color camouflage. To handle these challenges we propose a fusion based moving object segmentation algorithm which exploits color as well as depth information using GAN to achieve more accuracy. Our goal is to segment moving objects in the presence of challenging background scenes, in real environments. To address this problem, GAN is trained in an unsupervised manner on color and depth information independently with challenging video sequences. During testing, the trained GAN generates backgrounds similar to that in the test sample. The generated background samples are then compared with the test sample to segment moving objects. The final result is computed by fusion of object boundaries in both modalities, RGB and the depth. The comparison of our proposed algorithm with five state-of-the-art methods on publicly available dataset has shown the strength of our algorithm for moving object segmentation in videos in the presence of challenging real scenarios.",
"title": ""
},
{
"docid": "neg:1840159_16",
"text": "Network throughput can be increased by allowing multipath, adaptive routing. Adaptive routing allows more freedom in the paths taken by messages, spreading load over physical channels more evenly. The flexibility of adaptive routing introduces new possibilities of deadlock. Previous deadlock avoidance schemes in k-ary n-cubes require an exponential number of virtual channels, independent of network size and dimension. Planar adaptive routing algorithms reduce the complexity of deadlock prevention by reducing the number of choices at each routing step. In the fault-free case, planar-adaptive networks are guaranteed to be deadlock-free. In the presence of network faults, the planar-adaptive router can be extended with misrouting to produce a working network which remains provably deadlock free and is provably livelock free. In addition, planar adaptive networks can simultaneously support both in-order and adaptive, out-of-order packet delivery.\nPlanar-adaptive routing is of practical significance. It provides the simplest known support for deadlock-free adaptive routing in k-ary n-cubes of more than two dimensions (with k > 2). Restricting adaptivity reduces the hardware complexity, improving router speed or allowing additional performance-enhancing network features. The structure of planar-adaptive routers is amenable to efficient implementation.",
"title": ""
},
{
"docid": "neg:1840159_17",
"text": "AIM\nThe purpose of this review is to represent acids that can be used as surface etchant before adhesive luting of ceramic restorations, placement of orthodontic brackets or repair of chipped porcelain restorations. Chemical reactions, application protocol, and etching effect are presented as well.\n\n\nSTUDY SELECTION\nAvailable scientific articles published in PubMed and Scopus literature databases, scientific reports and manufacturers' instructions and product information from internet websites, written in English, using following search terms: \"acid etching, ceramic surface treatment, hydrofluoric acid, acidulated phosphate fluoride, ammonium hydrogen bifluoride\", have been reviewed.\n\n\nRESULTS\nThere are several acids with fluoride ion in their composition that can be used as ceramic surface etchants. The etching effect depends on the acid type and its concentration, etching time, as well as ceramic type. The most effective etching pattern is achieved when using hydrofluoric acid; the numerous micropores and channels of different sizes, honeycomb-like appearance, extruded crystals or scattered irregular ceramic particles, depending on the ceramic type, have been detected on the etched surfaces.\n\n\nCONCLUSION\nAcid etching of the bonding surface of glass - ceramic restorations is considered as the most effective treatment method that provides a reliable bond with composite cement. Selective removing of the glassy matrix of silicate ceramics results in a micromorphological three-dimensional porous surface that allows micromechanical interlocking of the luting composite.",
"title": ""
},
{
"docid": "neg:1840159_18",
"text": "A formalism is presented for computing and organizing actions for autonomous agents in dynamic environments. We introduce the notion of teleo-reactive (T-R) programs whose execution entails the construction of circuitry for the continuous computation of the parameters and conditions on which agent action is based. In addition to continuous feedback, T-R programs support parameter binding and recursion. A primary di erence between T-R programs and many other circuit-based systems is that the circuitry of T-R programs is more compact; it is constructed at run time and thus does not have to anticipate all the contingencies that might arise over all possible runs. In addition, T-R programs are intuitive and easy to write and are written in a form that is compatible with automatic planning and learning methods. We brie y describe some experimental applications of T-R programs in the control of simulated and actual mobile robots.",
"title": ""
},
{
"docid": "neg:1840159_19",
"text": "In this work, we propose a novel way to consider the clustering and the reduction of the dimension simultaneously. Indeed, our approach takes advantage of the mutual reinforcement between data reduction and clustering tasks. The use of a low-dimensional representation can be of help in providing simpler and more interpretable solutions. We show that by doing so, our model is able to better approximate the relaxed continuous dimension reduction solution by the true discrete clustering solution. Experimental results show that our method gives better results in terms of clustering than the state-of-the-art algorithms devoted to similar tasks for data sets with different properties.",
"title": ""
}
] |
1840160 | Data Mining Model for Predicting Student Enrolment in STEM Courses in Higher Education Institutions | [
{
"docid": "pos:1840160_0",
"text": "This paper explores the socio-demographic variables (age, gender, ethnicity, education, work status, and disability) and study environment (course programme and course block) that may influence persistence or dropout of students at the Open Polytechnic of New Zealand. We examine to what extent these factors, i.e. enrolment data, help us in pre-identifying successful and unsuccessful students. The data stored in the Open Polytechnic student management system from 2006 to 2009, covering over 450 students who enrolled in the 71150 Information Systems course, was used to perform a quantitative analysis of study outcome. Based on data mining techniques (such as feature selection and classification trees), the most important factors for student success and a profile of the typical successful and unsuccessful students are identified. The empirical results show the following: (i) the most important factors separating successful from unsuccessful students are: ethnicity, course programme and course block; (ii) among classification tree growing methods Classification and Regression Tree (CART) was the most successful in growing the tree with an overall percentage of correct classification of 60.5%; and (iii) both the risk estimated by the cross-validation and the gain diagram suggest that all trees based only on enrolment data are not quite good at separating successful from unsuccessful students. The implications of these results for academic and administrative staff are discussed.",
"title": ""
},
{
"docid": "pos:1840160_1",
"text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The extracted data can be analyzed for future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract local (district) level census, socio-economic and other population-related data for knowledge discovery and analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form which is easily comprehensible to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help users focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. Data cleaning: the removal of noise and inconsistent data. 2. 
Data integration: the combination of multiple sources of data. 3. Data selection: the data relevant for analysis is retrieved from the database. 4. Data transformation: the consolidation and transformation of data into forms appropriate for mining. 5. Data mining: the use of intelligent methods to extract patterns from data. 6. Pattern evaluation: identification of patterns that are interesting. 7. Knowledge presentation: visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013”, Singapore: Census Data Mining and Data Analysis using WEKA. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License. 
The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License. II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform. III. A comprehensive collection of data preprocessing and modeling techniques. IV. Ease of use due to its graphical user interfaces. Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query. 
It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as the census (2001), socio-economic data, and other basic information on Latur district are collected from the National Informatics Centre (NIC), Latur; these are mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in the MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format for processing in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA, the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig. 1 Processed ARFF file in WEKA. In the file shown above, data for 729 villages is processed with 25 different attributes such as population, health, literacy, village locations, etc. 
Among all these, a few are preprocessed attributes generated from the census data, such as percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio, etc. The processed data in Weka can be analyzed using different data mining techniques such as classification, clustering, association rule mining, and visualization algorithms. Figure 2 shows a few of the processed attributes visualized in a two-dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. The information can be extracted with respect to the associative relation of two or more attributes of the data set. In this process, we have made an attempt to visualize the impact of male and female literacy on gender inequality. The literacy-related and population data is processed to compute the percentage-wise male and female literacy. Accordingly, we have computed the sex ratio attribute from the given male and female population data. The new attributes male_percent_literacy, female_percent_literacy and sex_ratio are compared with each other to extract the impact of literacy on gender inequality. Figures 3 and 4 are the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and sex ratio values. Fig. 4 Male literacy and sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. Considering both results, female percent literacy is poorer than male percent literacy in the district. The sex ratio values are higher for male percent literacy than for female percent literacy. 
The results clearly show that literacy is very important for managing the gender inequality of any region. ACKNOWLEDGEMENT: Authors are grateful to the department of NIC, Latur for providing all the basic data and to WEKA for providing such a strong tool to extract and analyze knowledge from databases. CONCLUSION Knowledge extraction from database is becom",
"title": ""
},
{
"docid": "pos:1840160_2",
"text": "Many companies, such as those in the credit card, insurance, banking, and retail industries, require direct marketing. Data mining can help those institutions set marketing goals. Data mining techniques have good prospects in identifying their target audiences and improving the likelihood of response. In this work we have investigated two data mining techniques: the Naïve Bayes and the C4.5 decision tree algorithms. The goal of this work is to predict whether a client will subscribe to a term deposit. We also made a comparative study of the performance of those two algorithms. Publicly available UCI data is used to train and test the performance of the algorithms. Besides, we extract actionable knowledge from the decision tree that helps in making interesting and important decisions in the business area.",
"title": ""
}
] | [
{
"docid": "neg:1840160_0",
"text": "It's natural to promote your best and brightest, especially when you think they may leave for greener pastures if you don't continually offer them new challenges and rewards. But promoting smart, ambitious young managers too quickly often robs them of the chance to develop the emotional competencies that come with time and experience--competencies like the ability to negotiate with peers, regulate emotions in times of crisis, and win support for change. Indeed, at some point in a manager's career--usually at the vice president level--raw talent and ambition become less important than the ability to influence and persuade, and that's the point at which the emotionally immature manager will lose his effectiveness. This article argues that delaying a promotion can sometimes be the best thing a senior executive can do for a junior manager. The inexperienced manager who is given time to develop his emotional competencies may be better prepared for the interpersonal demands of top-level leadership. The authors recommend that senior executives employ these strategies to help boost their protégés' people skills: sharpen the 360-degree feedback process, give managers cross-functional assignments to improve their negotiation skills, make the development of emotional competencies mandatory, make emotional competencies a performance measure, and encourage managers to develop informal learning partnerships with peers and mentors. Delaying a promotion can be difficult given the steadfast ambitions of many junior executives and the hectic pace of organizational life. It may mean going against the norm of promoting people almost exclusively on smarts and business results. It may also mean contending with the disappointment of an esteemed subordinate. But taking the time to build people's emotional competencies isn't an extravagance; it's critical to developing effective leaders.",
"title": ""
},
{
"docid": "neg:1840160_1",
"text": "The quantity and complexity of available information is rapidly increasing. This potential information overload challenges the standard information retrieval models, as users find it increasingly difficult to find relevant information. We therefore propose a method that can utilize the potentially valuable knowledge contained in concept models such as ontologies, and thereby assist users in querying, using the terminology of the domain. The primary focus of this dissertation is similarity measures for use in ontology-based information retrieval. We aim at incorporating the information contained in ontologies by choosing a representation formalism where queries and objects in the information base are described using a lattice-algebraic concept language containing expressions that can be directly mapped into the ontology. Similarity between the description of the query and descriptions of the objects is calculated based on a nearness principle derived from the structure and relations of the ontology. This measure is then used to perform ontology-based query expansion. By doing so, we can replace semantic matching from direct reasoning over the ontology with numerical similarity calculation by means of a general aggregation principle. The choice of the proposed similarity measure is guided by a set of properties aimed at ensuring the measure's accordance with a set of distinctive structural qualities derived from the ontology. We furthermore empirically evaluate the proposed similarity measure by comparing the similarity ratings for pairs of concepts produced by the proposed measure with the mean similarity ratings produced by humans for the same pairs.",
"title": ""
},
{
"docid": "neg:1840160_2",
"text": "Although prior research has examined how individual difference factors are related to relationship initiation and formation over the Internet (e.g., online dating sites, social networking sites), little research has examined how dispositional factors are related to other aspects of online dating. The present research therefore sought to examine the relationship between several dispositional factors, such as Big-Five personality traits, self-esteem, rejection sensitivity, and attachment styles, and the use of online dating sites and online dating behaviors. Rejection sensitivity was the only dispositional variable predictive of use of online dating sites whereby those higher in rejection sensitivity are more likely to use online dating sites than those lower in rejection sensitivity. We also found that those higher in rejection sensitivity, those lower in conscientiousness, and men indicated being more likely to engage in potentially risky behaviors related to meeting an online dating partner face-to-face. Further research is needed to further explore the relationships between these dispositional factors and online dating behaviors. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840160_3",
"text": "The undergraduate computer science curriculum is generally focused on skills and tools; most students are not exposed to much research in the field, and do not learn how to navigate the research literature. We describe how science fiction reviews were used as a gateway to research reviews. Students learn a little about current or recent research on a topic that stirs their imagination, and learn how to search for, read critically, and compare technical papers on a topic related their chosen science fiction book, movie, or TV show.",
"title": ""
},
{
"docid": "neg:1840160_4",
"text": "Increased neuromuscular excitability with varying clinical and EMG features was also observed during KCl administration in both cases. The findings are discussed in the light of the current theory of membrane ionic gradients.",
"title": ""
},
{
"docid": "neg:1840160_5",
"text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.",
"title": ""
},
{
"docid": "neg:1840160_6",
"text": "Current research on culture focuses on independence and interdependence and documents numerous East-West psychological differences, with an increasing emphasis placed on cognitive mediating mechanisms. Lost in this literature is a time-honored idea of culture as a collective process composed of cross-generationally transmitted values and associated behavioral patterns (i.e., practices). A new model of neuro-culture interaction proposed here addresses this conceptual gap by hypothesizing that the brain serves as a crucial site that accumulates effects of cultural experience, insofar as neural connectivity is likely modified through sustained engagement in cultural practices. Thus, culture is \"embrained,\" and moreover, this process requires no cognitive mediation. The model is supported in a review of empirical evidence regarding (a) collective-level factors involved in both production and adoption of cultural values and practices and (b) neural changes that result from engagement in cultural practices. Future directions of research on culture, mind, and the brain are discussed.",
"title": ""
},
{
"docid": "neg:1840160_7",
"text": "Most sentiment analysis approaches use as baseline a support vector machines (SVM) classifier with binary unigram weights. In this paper, we explore whether more sophisticated feature weighting schemes from Information Retrieval can enhance classification accuracy. We show that variants of the classic tf.idf scheme adapted to sentiment analysis provide significant increases in accuracy, especially when using a sublinear function for term frequency weights and document frequency smoothing. The techniques are tested on a wide selection of data sets and produce the best accuracy to our knowledge.",
"title": ""
},
{
"docid": "neg:1840160_8",
"text": "In this paper, a double-axis planar micro-fluxgate magnetic sensor and its front-end circuitry are presented. The ferromagnetic core material, i.e., the Vitrovac 6025 X, has been deposited on top of the coils with the dc-magnetron sputtering technique, which is a new type of procedure with respect to the existing solutions in the field of fluxgate sensors. This procedure allows us to obtain a core with the good magnetic properties of an amorphous ferromagnetic material, which is typical of a core with 25-μm thickness, but with a thickness of only 1 μm, which is typical of an electrodeposited core. The micro-fluxgate has been realized in a 0.5-μm CMOS process using copper metal lines to realize the excitation coil and aluminum metal lines for the sensing coil, whereas the integrated interface circuitry for exciting and reading out the sensor has been realized in a 0.35-μm CMOS technology. Applying a triangular excitation current of 18 mA peak at 100 kHz, the magnetic sensitivity achieved is about 10 LSB/μT [using a 13-bit analog-to-digital converter (ADC)], which is suitable for detecting the Earth's magnetic field (±60 μT), whereas the linearity error is 3% of the full scale. The maximum angle error of the sensor evaluating the Earth's magnetic field is 2°. The power consumption of the sensor is about 13.7 mW. The total power consumption of the system is about 90 mW.",
"title": ""
},
{
"docid": "neg:1840160_9",
"text": "INTRODUCTION: In the past, dentists placed implants only in locations with sufficient bone dimensions, with less regard to the placement of the final definitive restoration; but most of the time, implant placement is not as accurate as intended, and even a minor variation from the ideal placement causes difficulties in fabrication of the final prosthesis. The use of bone substitutes and membranes is now one of the standard therapeutic approaches. In order to accelerate healing of bone graft over the bony defect, numerous techniques utilizing platelet and fibrinogen concentrates have been introduced in the literature. OBJECTIVES: This study was designed to evaluate the efficacy of using Autologous Concentrated Growth Factors (CGF) Enriched Bone Graft Matrix (Sticky Bone) and CGF-Enriched Fibrin Membrane in management of dehiscence defect around dental implant in narrow maxillary anterior ridge. MATERIALS AND METHODS: Eleven DIO implants were inserted in six adult patients presenting an upper alveolar ridge width of less than 4 mm determined by cone beam computed tomography (CBCT). After implant placement, the resultant vertical labial dehiscence defect was augmented utilizing Sticky Bone and CGF-Enriched Fibrin Membrane. Three CBCTs were made: pre-operatively, immediately post-operatively and six months post-operatively. The change in vertical defect size was calculated radiographically then statistically analyzed. RESULTS: Vertical dehiscence defect was sufficiently recovered in 5 implant sites, while in the other 6 sites it was decreased to a mean value of 1.25 mm ± 0.69 SD, i.e. the defect coverage in 6 implants occurred with a mean value of 4.59 mm ± 0.49 SD. Also the results of the present study showed that the mean of average implant stability was 59.89 ± 3.92. CONCLUSIONS: The combination of PRF mixed with CGF with bone graft (allograft) can increase the quality (density) of the newly formed bone and enhance the rate of new bone formation.",
"title": ""
},
{
"docid": "neg:1840160_10",
"text": "Insulin resistance has long been associated with obesity. More than 40 years ago, Randle and colleagues postulated that lipids impaired insulin-stimulated glucose use by muscles through inhibition of glycolysis at key points. However, work over the past two decades has shown that lipid-induced insulin resistance in skeletal muscle stems from defects in insulin-stimulated glucose transport activity. The steatotic liver is also resistant to insulin in terms of inhibition of hepatic glucose production and stimulation of glycogen synthesis. In muscle and liver, the intracellular accumulation of lipids-namely, diacylglycerol-triggers activation of novel protein kinases C with subsequent impairments in insulin signalling. This unifying hypothesis accounts for the mechanism of insulin resistance in obesity, type 2 diabetes, lipodystrophy, and ageing; and the insulin-sensitising effects of thiazolidinediones.",
"title": ""
},
{
"docid": "neg:1840160_11",
"text": "We have applied a Long Short-Term Memory neural network to model S&P 500 volatility, incorporating Google domestic trends as indicators of the public mood and macroeconomic factors. In a held-out test set, our Long Short-Term Memory model gives a mean absolute percentage error of 24.2%, outperforming linear Ridge/Lasso and autoregressive GARCH benchmarks by at least 31%. This evaluation is based on an optimal observation and normalization scheme which maximizes the mutual information between domestic trends and daily volatility in the training set. Our preliminary investigation shows strong promise for better predicting stock behavior via deep learning and neural network models.",
"title": ""
},
{
"docid": "neg:1840160_12",
"text": "BACKGROUND\nWith the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics.\n\n\nDESCRIPTION\nThis paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications.\n\n\nCONCLUSION\nTopic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.",
"title": ""
},
{
"docid": "neg:1840160_13",
"text": "The output voltage derivative term associated with a PID controller injects significant noise in a dc-dc converter. This is mainly due to the parasitic resistance and inductance of the output capacitor. Particularly, during a large-signal transient, noise injection significantly degrades phase margin. Although noise characteristics can be improved by reducing the cutoff frequency of the low-pass filter associated with the voltage derivative, this degrades the closed-loop bandwidth. A formulation of a PID controller is introduced to replace the output voltage derivative with information about the capacitor current, thus reducing noise injection. It is shown that this formulation preserves the fundamental principle of a PID controller and incorporates a load current feedforward, as well as inductor current dynamics. This can be helpful to further improve bandwidth and phase margin. The proposed method is shown to be equivalent to a voltage-mode-controlled buck converter and a current-mode-controlled boost converter with a PID controller in the voltage feedback loop. A buck converter prototype is tested, and the proposed algorithm is implemented using a field-programmable gate array.",
"title": ""
},
{
"docid": "neg:1840160_14",
"text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.",
"title": ""
},
{
"docid": "neg:1840160_15",
"text": "Tissue engineering aims to improve the function of diseased or damaged organs by creating biological substitutes. To fabricate a functional tissue, the engineered construct should mimic the physiological environment including its structural, topographical, and mechanical properties. Moreover, the construct should facilitate nutrients and oxygen diffusion as well as removal of metabolic waste during tissue regeneration. In the last decade, fiber-based techniques such as weaving, knitting, braiding, as well as electrospinning, and direct writing have emerged as promising platforms for making 3D tissue constructs that can address the abovementioned challenges. Here, we critically review the techniques used to form cell-free and cell-laden fibers and to assemble them into scaffolds. We compare their mechanical properties, morphological features and biological activity. We discuss current challenges and future opportunities of fiber-based tissue engineering (FBTE) for use in research and clinical practice.",
"title": ""
},
{
"docid": "neg:1840160_16",
"text": "Cluster identification in large-scale information networks is a highly attractive issue in network knowledge mining. Traditionally, community detection algorithms are designed to cluster the object population by minimizing the number of cut edges. Recently, researchers proposed the concept of a higher-order clustering framework to segment network objects under higher-order connectivity patterns. However, these methodologies essentially focus on mining homogeneous networks to identify groups of objects which are closely related to each other, ignoring the heterogeneity of the different types of objects and links in the networks. In this study, we propose an integrated framework of heterogeneous information network structure and higher-order clustering for mining hidden relationships, which includes three major steps: (1) construct the heterogeneous network, (2) convert the HIN to a homogeneous network, and (3) community detection.",
"title": ""
},
{
"docid": "neg:1840160_17",
"text": "Human pose estimation is a well-known computer vision problem that receives intensive research interest. The reason for such interest is the wide range of applications that the successful estimation of human pose offers. Articulated pose estimation includes real time acquisition, analysis, processing and understanding of high dimensional visual information. Ensemble learning methods operating on hand-engineered features have been commonly used for addressing this task. Deep learning exploits representation learning methods to learn multiple levels of representations from raw input data, alleviating the need to hand-crafted features. Deep convolutional neural networks are achieving the state-of-the-art in visual object recognition, localization, detection. In this paper, the pose estimation task is formulated as an offset joint regression problem. The 3D joints positions are accurately detected from a single raw depth image using a deep convolutional neural networks model. The presented method relies on the utilization of the state-of-the-art data generation pipeline to generate large, realistic, and highly varied synthetic set of training images. Analysis and experimental results demonstrate the generalization performance and the real time successful application of the proposed method.",
"title": ""
},
{
"docid": "neg:1840160_18",
"text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.",
"title": ""
},
{
"docid": "neg:1840160_19",
"text": "Michelangelo (1475-1564) had a life-long interest in anatomy that began with his participation in public dissections in his early teens, when he joined the court of Lorenzo de' Medici and was exposed to its physician-philosopher members. By the age of 18, he began to perform his own dissections. His early anatomic interests were revived later in life when he aspired to publish a book on anatomy for artists and to collaborate in the illustration of a medical anatomy text that was being prepared by the Paduan anatomist Realdo Colombo (1516-1559). His relationship with Colombo likely began when Colombo diagnosed and treated him for nephrolithiasis in 1549. He seems to have developed gouty arthritis in 1555, making the possibility of uric acid stones a distinct probability. Recurrent urinary stones until the end of his life are well documented in his correspondence, and available documents imply that he may have suffered from nephrolithiasis earlier in life. His terminal illness with symptoms of fluid overload suggests that he may have sustained obstructive nephropathy. That this may account for his interest in kidney function is evident in his poetry and drawings. Most impressive in this regard is the mantle of the Creator in his painting of the Separation of Land and Water in the Sistine Ceiling, which is in the shape of a bisected right kidney. His use of the renal outline in a scene representing the separation of solids (Land) from liquid (Water) suggests that Michelangelo was likely familiar with the anatomy and function of the kidney as it was understood at the time.",
"title": ""
}
] |
1840161 | Map-supervised road detection | [
{
"docid": "pos:1840161_0",
"text": "In this paper, we propose to fuse the LIDAR and monocular image in the framework of conditional random field to detect the road robustly in challenging scenarios. LIDAR points are aligned with pixels in image by cross calibration. Then boosted decision tree based classifiers are trained for image and point cloud respectively. The scores of the two kinds of classifiers are treated as the unary potentials of the corresponding pixel nodes of the random field. The fused conditional random field can be solved efficiently with graph cut. Extensive experiments tested on KITTI-Road benchmark show that our method reaches the state-of-the-art.",
"title": ""
},
{
"docid": "pos:1840161_1",
"text": "The majority of current image-based road following algorithms operate, at least in part, by assuming the presence of structural or visual cues unique to the roadway. As a result, these algorithms are poorly suited to the task of tracking unstructured roads typical in desert environments. In this paper, we propose a road following algorithm that operates in a selfsupervised learning regime, allowing it to adapt to changing road conditions while making no assumptions about the general structure or appearance of the road surface. An application of optical flow techniques, paired with one-dimensional template matching, allows identification of regions in the current camera image that closely resemble the learned appearance of the road in the recent past. The algorithm assumes the vehicle lies on the road in order to form templates of the road’s appearance. A dynamic programming variant is then applied to optimize the 1-D template match results while enforcing a constraint on the maximum road curvature expected. Algorithm output images, as well as quantitative results, are presented for three distinct road types encountered in actual driving video acquired in the California Mojave Desert.",
"title": ""
}
] | [
{
"docid": "neg:1840161_0",
"text": "Collaborative filtering, a widely-used user-centric recommendation technique, predicts an item’s rating by aggregating its ratings from similar users. User similarity is usually calculated by cosine similarity or Pearson correlation coefficient. However, both of them consider only the direction of rating vectors, and suffer from a range of drawbacks. To solve these issues, we propose a novel Bayesian similarity measure based on the Dirichlet distribution, taking into consideration both the direction and length of rating vectors. Further, our principled method reduces correlation due to chance. Experimental results on six real-world data sets show that our method achieves superior accuracy.",
"title": ""
},
{
"docid": "neg:1840161_1",
"text": "In this paper, new dense dielectric (DD) patch array antenna prototype operating at 28 GHz for the future fifth generation (5G) short-range wireless communications applications is presented. This array antenna is proposed and designed with a standard printed circuit board (PCB) process to be suitable for integration with radio-frequency/microwave circuitry. The proposed structure employs four circular shaped DD patch radiator antenna elements fed by a l-to-4 Wilkinson power divider surrounded by an electromagnetic bandgap (EBG) structure. The DD patch shows better radiation and total efficiencies compared with the metallic patch radiator. For further gain improvement, a dielectric layer of a superstrate is applied above the array antenna. The calculated impedance bandwidth of proposed array antenna ranges from 27.1 GHz to 29.5 GHz for reflection coefficient (Sn) less than -1OdB. The proposed design exhibits good stable radiation patterns over the whole frequency band of interest with a total realized gain more than 16 dBi. Due to the remarkable performance of the proposed array, it can be considered as a good candidate for 5G communication applications.",
"title": ""
},
{
"docid": "neg:1840161_2",
"text": "Entheses are sites where tendons, ligaments, joint capsules or fascia attach to bone. Inflammation of the entheses (enthesitis) is a well-known hallmark of spondyloarthritis (SpA). As entheses are associated with adjacent, functionally related structures, the concepts of an enthesis organ and functional entheses have been proposed. This is important in interpreting imaging findings in entheseal-related diseases. Conventional radiographs and CT are able to depict the chronic changes associated with enthesitis but are of very limited use in early disease. In contrast, MRI is sensitive for detecting early signs of enthesitis and can evaluate both soft-tissue changes and intraosseous abnormalities of active enthesitis. It is therefore useful for the early diagnosis of enthesitis-related arthropathies and monitoring therapy. Current knowledge and typical MRI features of the most commonly involved entheses of the appendicular skeleton in patients with SpA are reviewed. The MRI appearances of inflammatory and degenerative enthesopathy are described. New options for imaging enthesitis, including whole-body MRI and high-resolution microscopy MRI, are briefly discussed.",
"title": ""
},
{
"docid": "neg:1840161_3",
"text": "Each July since 2003, the author has directed summer camps that introduce middle school boys and girls to the basic ideas of computer programming. Prior to 2009, the author used Alice 2.0 to introduce object-based computing. In 2009, the author decided to offer these camps using Scratch, primarily to engage repeat campers but also for variety. This paper provides a detailed overview of this outreach, and documents its success at providing middle school girls with a positive, engaging computing experience. It also discusses the merits of Alice and Scratch for such outreach efforts; and the use of these visually oriented programs by students with disabilities, including blind students.",
"title": ""
},
{
"docid": "neg:1840161_4",
"text": "Introduction The literature on business process re-engineering, benchmarking, continuous improvement and many other approaches of modern management is very abundant. One thing which is noticeable, however, is the growing usage of the word “process” in everyday business language. This suggests that most organizations adopt a process-based approach to managing their operations and that business process management (BPM) is a well-established concept. Is this really what takes place? On examination of the literature which refers to BPM, it soon emerged that the use of this concept is not really pervasive and what in fact has been acknowledged hitherto as prevalent business practice is no more than structural changes, the use of systems such as EN ISO 9000 and the management of individual projects.",
"title": ""
},
{
"docid": "neg:1840161_5",
"text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.",
"title": ""
},
{
"docid": "neg:1840161_6",
"text": "Digital investigations of the real world through point clouds and derivatives are changing how curators, cultural heritage researchers and archaeologists work and collaborate. To progressively aggregate expertise and enhance the working proficiency of all professionals, virtual reconstructions demand adapted tools to facilitate knowledge dissemination. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. In this paper, we review the state of the art of point cloud integration within archaeological applications, giving an overview of 3D technologies for heritage, digital exploitation and case studies showing the assimilation status within 3D GIS. Identified issues and new perspectives are addressed through a knowledge-based point cloud processing framework for multi-sensory data, and illustrated on mosaics and quasi-planar objects. A new acquisition, pre-processing, segmentation and ontology-based classification method on hybrid point clouds from both terrestrial laser scanning and dense image matching is proposed to enable reasoning for information extraction. Experiments in detection and semantic enrichment show promising results of 94% correct semantization. Then, we integrate the metadata in an archaeological smart point cloud data structure allowing spatio-semantic queries related to CIDOC-CRM. Finally, a WebGL prototype is presented that leads to efficient communication between actors by proposing optimal 3D data visualizations as a basis on which interaction can grow.",
"title": ""
},
{
"docid": "neg:1840161_7",
"text": "This document includes supplementary material for the semi-supervised approach towards framesemantic parsing for unknown predicates (Das and Smith, 2011). We include the names of the test documents used in the study, plot the results for framesemantic parsing while varying the hyperparameter that is used to determine the number of top frames to be selected from the posterior distribution over each target of a constructed graph and argue why the semi-supervised self-training baseline did not perform well on the task.",
"title": ""
},
{
"docid": "neg:1840161_8",
"text": "Many grid connected power electronic systems, such as STATCOMs, UPFCs, and distributed generation system interfaces, use a voltage source inverter (VSI) connected to the supply network through a filter. This filter, typically a series inductance, acts to reduce the switching harmonics entering the distribution network. An alternative filter is a LCL network, which can achieve reduced levels of harmonic distortion at lower switching frequencies and with less inductance, and therefore has potential benefits for higher power applications. However, systems incorporating LCL filters require more complex control strategies and are not commonly presented in literature. This paper proposes a robust strategy for regulating the grid current entering a distribution network from a three-phase VSI system connected via a LCL filter. The strategy integrates an outer loop grid current regulator with inner capacitor current regulation to stabilize the system. A synchronous frame PI current regulation strategy is used for the outer grid current control loop. Linear analysis, simulation, and experimental results are used to verify the stability of the control algorithm across a range of operating conditions. Finally, expressions for “harmonic impedance” of the system are derived to study the effects of supply voltage distortion on the harmonic performance of the system.",
"title": ""
},
{
"docid": "neg:1840161_9",
"text": "This work presents a Brain-Computer Interface (BCI) based on the Steady-State Visual Evoked Potential (SSVEP) that can discriminate four classes once per second. A statistical test is used to extract the evoked response and a decision tree is used to discriminate the stimulus frequency. Designed according such approach, volunteers were capable to online operate a BCI with hit rates varying from 60% to 100%. Moreover, one of the volunteers could guide a robotic wheelchair through an indoor environment using such BCI. As an additional feature, such BCI incorporates a visual feedback, which is essential for improving the performance of the whole system. All of this aspects allow to use this BCI to command a robotic wheelchair efficiently.",
"title": ""
},
{
"docid": "neg:1840161_10",
"text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-ofthe-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.",
"title": ""
},
{
"docid": "neg:1840161_11",
"text": "Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible.",
"title": ""
},
{
"docid": "neg:1840161_12",
"text": "This study assessed the validity of the Balance Scale by examining: how Scale scores related to clinical judgements and self-perceptions of balance, laboratory measures of postural sway and external criteria reflecting balancing ability; if scores could predict falls in the elderly; and how they related to motor and functional performance in stroke patients. Elderly residents (N = 113) were assessed for functional performance and balance regularly over a nine-month period. Occurrence of falls was monitored for a year. Acute stroke patients (N = 70) were periodically rated for functional independence, motor performance and balance for over three months. Thirty-one elderly subjects were assessed by clinical and laboratory indicators reflecting balancing ability. The Scale correlated moderately with caregiver ratings, self-ratings and laboratory measures of sway. Differences in mean Scale scores were consistent with the use of mobility aids by elderly residents and differentiated stroke patients by location of follow-up. Balance scores predicted the occurrence of multiple falls among elderly residents and were strongly correlated with functional and motor performance in stroke patients.",
"title": ""
},
{
"docid": "neg:1840161_13",
"text": "This paper describes an efficient technique for com' puting a hierarchical representation of the objects contained in a complex 3 0 scene. First, an adjacency graph keeping the costs of grouping the different pairs of objects in the scene is built. Then the minimum spanning tree (MST) of that graph is determined. A binary clustering tree (BCT) is obtained from the MS'I: Finally, a merging stage joins the adjacent nodes in the BCT which have similar costs. The final result is an n-ary tree which defines an intuitive clustering of the objects of the scene at different levels of abstraction. Experimental results with synthetic 3 0 scenes are presented.",
"title": ""
},
{
"docid": "neg:1840161_14",
"text": "Many information systems involve data about people. In order to reliably associate data with particular individuals, it is necessary that an effective and efficient identification scheme be established and maintained. There is remarkably little in the information technology literature concerning human identification. This paper seeks to overcome that deficiency, by undertaking a survey of human identity and human identification. The techniques discussed include names, codes, knowledge-based and token-based id, and biometrics. The key challenge to management is identified as being to devise a scheme which is practicable and economic, and of sufficiently high integrity to address the risks the organisation confronts in its dealings with people. It is proposed that much greater use be made of schemes which are designed to afford people anonymity, or enable them to use multiple identities or pseudonyms, while at the same time protecting the organisation's own interests. Multi-purpose and inhabitant registration schemes are described, and the recurrence of proposals to implement and extent them is noted. Public policy issues are identified. Of especial concern is the threat to personal privacy that the general-purpose use of an inhabitant registrant scheme represents. It is speculated that, where such schemes are pursued energetically, the reaction may be strong enough to threaten the social fabric.",
"title": ""
},
{
"docid": "neg:1840161_15",
"text": "Introducing a new hobby for other people may inspire them to join with you. Reading, as one of mutual hobby, is considered as the very easy hobby to do. But, many people are not interested in this hobby. Why? Boring is the reason of why. However, this feel actually can deal with the book and time of you reading. Yeah, one that we will refer to break the boredom in reading is choosing design of experiments statistical principles of research design and analysis as the reading material.",
"title": ""
},
{
"docid": "neg:1840161_16",
"text": "As people document more of their lives online, some recent systems are encouraging people to later revisit those recordings, a practice we're calling technology-mediated reflection (TMR). Since we know that unmediated reflection benefits psychological well-being, we explored whether and how TMR affects well-being. We built Echo, a smartphone application for recording everyday experiences and reflecting on them later. We conducted three system deployments with 44 users who generated over 12,000 recordings and reflections. We found that TMR improves well-being as assessed by four psychological metrics. By analyzing the content of these entries we discovered two mechanisms that explain this improvement. We also report benefits of very long-term TMR.",
"title": ""
},
{
"docid": "neg:1840161_17",
"text": "The paper syncretizes the fundamental concept of the Sea Computing model in Internet of Things and the routing protocol of the wireless sensor network, and proposes a new routing protocol CASCR (Context-Awareness in Sea Computing Routing Protocol) for Internet of Things, based on context-awareness which belongs to the key technologies of Internet of Things. Furthermore, the paper describes the details on the protocol in the work flow, data structure and quantitative algorithm and so on. Finally, the simulation is given to analyze the work performance of the protocol CASCR. Theoretical analysis and experiment verify that CASCR has higher energy efficient and longer lifetime than the congeneric protocols. The paper enriches the theoretical foundation and makes some contribution for wireless sensor network transiting to Internet of Things in this research phase.",
"title": ""
},
{
"docid": "neg:1840161_18",
"text": "This research aimed at the case of customers’ default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification credible or not credible clients. Because the real probability of default is unknown, this study presented the novel ‘‘Sorting Smoothing Method” to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840161_19",
"text": "It is a difficult task to classify images with multiple class labels using only a small number of labeled examples, especially when the label (class) distribution is imbalanced. Emotion classification is such an example of imbalanced label distribution, because some classes of emotions like disgusted are relatively rare comparing to other labels like happy or sad. In this paper, we propose a data augmentation method using generative adversarial networks (GAN). It can complement and complete the data manifold and find better margins between neighboring classes. Specifically, we design a framework using a CNN model as the classifier and a cycle-consistent adversarial networks (CycleGAN) as the generator. In order to avoid gradient vanishing problem, we employ the least-squared loss as adversarial loss. We also propose several evaluation methods on three benchmark datasets to validate GAN’s performance. Empirical results show that we can obtain 5%∼10% increase in the classification accuracy after employing the GAN-based data augmentation techniques.",
"title": ""
}
] |
1840162 | Airwriting: Hands-Free Mobile Text Input by Spotting and Continuous Recognition of 3d-Space Handwriting with Inertial Sensors | [
{
"docid": "pos:1840162_0",
"text": "We present a method for spotting sporadically occurring gestures in a continuous data stream from body-worn inertial sensors. Our method is based on a natural partitioning of continuous sensor signals and uses a two-stage approach for the spotting task. In a first stage, signal sections likely to contain specific motion events are preselected using a simple similarity search. Those preselected sections are then further classified in a second stage, exploiting the recognition capabilities of hidden Markov models. Based on two case studies, we discuss implementation details of our approach and show that it is a feasible strategy for the spotting of various types of motion events. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "pos:1840162_1",
"text": "Until recently the weight and size of inertial sensors has prohibited their use in domains such as human motion capture. Recent improvements in the performance of small and lightweight micromachined electromechanical systems (MEMS) inertial sensors have made the application of inertial techniques to such problems possible. This has resulted in an increased interest in the topic of inertial navigation, however current introductions to the subject fail to sufficiently describe the error characteristics of inertial systems. We introduce inertial navigation, focusing on strapdown systems based on MEMS devices. A combination of measurement and simulation is used to explore the error characteristics of such systems. For a simple inertial navigation system (INS) based on the Xsens Mtx inertial measurement unit (IMU), we show that the average error in position grows to over 150 m after 60 seconds of operation. The propagation of orientation errors caused by noise perturbing gyroscope signals is identified as the critical cause of such drift. By simulation we examine the significance of individual noise processes perturbing the gyroscope signals, identifying white noise as the process which contributes most to the overall drift of the system. Sensor fusion and domain specific constraints can be used to reduce drift in INSs. For an example INS we show that sensor fusion using magnetometers can reduce the average error in position obtained by the system after 60 seconds from over 150 m to around 5 m. We conclude that whilst MEMS IMU technology is rapidly improving, it is not yet possible to build a MEMS based INS which gives sub-meter position accuracy for more than one minute of operation.",
"title": ""
}
] | [
{
"docid": "neg:1840162_0",
"text": "We introduce a collective action model of institutional innovation. This model, based on converging perspectives from the technology innovation management and social movements literature, views institutional change as a dialectical process in which partisan actors espousing conflicting views confront each other and engage in political behaviors to create and change institutions. The model represents an important complement to existing models of institutional change. We discuss how these models together account for various stages and cycles of institutional change.",
"title": ""
},
{
"docid": "neg:1840162_1",
"text": "Gigantomastia by definition means bilateral benign progressive breast enlargement to a degree that requires breast reduction surgery to remove more than 1800 g of tissue on each side. It is seen at puberty or during pregnancy. The etiology for this condition is still not clear, but surgery remains the mainstay of treatment. We present a unique case of Gigantomastia, which was neither related to puberty nor pregnancy and has undergone three operations so far for recurrence.",
"title": ""
},
{
"docid": "neg:1840162_2",
"text": "Department of Microbiology, School of Life Sciences, Bharathidasan University, Tiruchirappali 620 024, Tamilnadu, India. Department of Medical Biotechnology, Sri Ramachandra University, Porur, Chennai 600 116, Tamilnadu, India. CAS Marine Biology, Annamalai University, Parangipettai 608 502, Tamilnadu, India. Department of Zoology, DDE, Annamalai University, Annamalai Nagar 608 002, Tamilnadu, India Asian Pacific Journal of Tropical Disease (2012)S291-S295",
"title": ""
},
{
"docid": "neg:1840162_3",
"text": "This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds. The framework consists of two operators: extension and restriction, mapping point cloud functions to volumetric functions and vice versa. A point cloud convolution is defined by pull-back of the Euclidean volumetric convolution via an extension-restriction mechanism.\n The point cloud convolution is computationally efficient, invariant to the order of points in the point cloud, robust to different samplings and varying densities, and translation invariant, that is the same convolution kernel is used at all points. PCNN generalizes image CNNs and allows readily adapting their architectures to the point cloud setting.\n Evaluation of PCNN on three central point cloud learning benchmarks convincingly outperforms competing point cloud learning methods, and the vast majority of methods working with more informative shape representations such as surfaces and/or normals.",
"title": ""
},
{
"docid": "neg:1840162_4",
"text": "Objective To develop a classifier that tackles the problem of determining the risk of a patient of suffering from a cardiovascular disease within the next ten years. The system has to provide both a diagnosis and an interpretable model explaining the decision. In this way, doctors are able to analyse the usefulness of the information given by the system. Methods Linguistic fuzzy rule-based classification systems are used, since they provide a good classification rate and a highly interpretable model. More specifically, a new methodology to combine fuzzy rule-based classification systems with interval-valued fuzzy sets is proposed, which is composed of three steps: 1) the modelling of the linguistic labels of the classifier using interval-valued fuzzy sets; 2) the use of the Kα operator in the inference process and 3) the application of a genetic tuning to find the best ignorance degree that each interval-valued fuzzy set represents as well as the best value for the parameter α of the Kα operator in each rule.",
"title": ""
},
{
"docid": "neg:1840162_5",
"text": "CRISPR–Cas systems provide microbes with adaptive immunity by employing short DNA sequences, termed spacers, that guide Cas proteins to cleave foreign DNA. Class 2 CRISPR–Cas systems are streamlined versions, in which a single RNA-bound Cas protein recognizes and cleaves target sequences. The programmable nature of these minimal systems has enabled researchers to repurpose them into a versatile technology that is broadly revolutionizing biological and clinical research. However, current CRISPR–Cas technologies are based solely on systems from isolated bacteria, leaving the vast majority of enzymes from organisms that have not been cultured untapped. Metagenomics, the sequencing of DNA extracted directly from natural microbial communities, provides access to the genetic material of a huge array of uncultivated organisms. Here, using genome-resolved metagenomics, we identify a number of CRISPR–Cas systems, including the first reported Cas9 in the archaeal domain of life, to our knowledge. This divergent Cas9 protein was found in little-studied nanoarchaea as part of an active CRISPR–Cas system. In bacteria, we discovered two previously unknown systems, CRISPR–CasX and CRISPR–CasY, which are among the most compact systems yet discovered. Notably, all required functional components were identified by metagenomics, enabling validation of robust in vivo RNA-guided DNA interference activity in Escherichia coli. Interrogation of environmental microbial communities combined with in vivo experiments allows us to access an unprecedented diversity of genomes, the content of which will expand the repertoire of microbe-based biotechnologies.",
"title": ""
},
{
"docid": "neg:1840162_6",
"text": "We explore the question of whether phase-based time-of-flight (TOF) range cameras can be used for looking around corners and through scattering diffusers. By connecting TOF measurements with theory from array signal processing, we conclude that performance depends on two primary factors: camera modulation frequency and the width of the specular lobe (“shininess”) of the wall. For purely Lambertian walls, commodity TOF sensors achieve resolution on the order of meters between targets. For seemingly diffuse walls, such as posterboard, the resolution is drastically reduced, to the order of 10cm. In particular, we find that the relationship between reflectance and resolution is nonlinear—a slight amount of shininess can lead to a dramatic improvement in resolution. Since many realistic scenes exhibit a slight amount of shininess, we believe that off-the-shelf TOF cameras can look around corners.",
"title": ""
},
{
"docid": "neg:1840162_7",
"text": "E-learning systems have gained nowadays a large student community due to the facility of use and the integration of one-to-one service. Indeed, the personalization of the learning process for every user is needed to increase the student satisfaction and learning efficiency. Nevertheless, the number of students who give up their learning process cannot be neglected. Therefore, it is mandatory to establish an efficient way to assess the level of personalization in such systems. In fact, assessing represents the evolution’s key in every personalized application and especially for the e-learning systems. Besides, when the e-learning system can decipher the student personality, the student learning process will be stabilized, and the dropout rate will be decreased. In this context, we propose to evaluate the personalization process in an e-learning platform using an intelligent referential system based on agents. It evaluates any recommendation made by the e-learning platform based on a comparison. We compare the personalized service of the e-learning system and those provided by our referential system. Therefore, our purpose consists in increasing the efficiency of the proposed system to obtain a significant assessment result; precisely, the aim is to improve the outcomes of every algorithm used in each defined agent. This paper deals with the intelligent agent ‘Mod-Knowledge’ responsible for analyzing the student interaction to trace the student knowledge state. The originality of this agent is that it treats the external and the internal student interactions using machine learning algorithms to obtain a complete view of the student knowledge state. The validation of this contribution is done with experiments showing that the proposed algorithms outperform the existing ones.",
"title": ""
},
{
"docid": "neg:1840162_8",
"text": "Counting codes makes qualitative content analysis a controversial approach to analyzing textual data. Several decades ago, mainstream content analysis rejected qualitative content analysis on the grounds that it was not sufficiently quantitative; today, it is often charged with not being sufficiently qualitative. This article argues that qualitative content analysis is distinctively qualitative in both its approach to coding and its interpretations of counts from codes. Rather than argue over whether to do qualitative content analysis, researchers must make informed decisions about when to use it in analyzing qualitative data.",
"title": ""
},
{
"docid": "neg:1840162_9",
"text": "Distributed data is ubiquitous in modern information driven applications. With multiple sources of data, the natural challenge is to determine how to collaborate effectively across proprietary organizational boundaries while maximizing the utility of collected information. Since using only local data gives suboptimal utility, techniques for privacy-preserving collaborative knowledge discovery must be developed. Existing cryptography-based work for privacy-preserving data mining is still too slow to be effective for large scale data sets to face today's big data challenge. Previous work on random decision trees (RDT) shows that it is possible to generate equivalent and accurate models with much smaller cost. We exploit the fact that RDTs can naturally fit into a parallel and fully distributed architecture, and develop protocols to implement privacy-preserving RDTs that enable general and efficient distributed privacy-preserving knowledge discovery.",
"title": ""
},
{
"docid": "neg:1840162_10",
"text": "Recent proliferation of Unmanned Aerial Vehicles (UAVs) into the commercial space has been accompanied by a similar growth in aerial imagery . While useful in many applications, the utility of this visual data is limited in comparison with the total range of desired airborne missions. In this work, we extract depth of field information from monocular images from UAV on-board cameras using a single frame of data per-mapping. Several methods have been previously used with varying degrees of success for similar spatial inferencing tasks, however we sought to take a different approach by framing this as an augmented style-transfer problem. In this work, we sought to adapt two of the state-of-theart style transfer methods to the problem of depth mapping. The first method adapted was based on the unsupervised Pix2Pix approach. The second was developed using a cyclic generative adversarial network (cycle GAN). In addition to these two approaches, we also implemented a baseline algorithm previously used for depth map extraction on indoor scenes, the multi-scale deep network. Using the insights gained from these implementations, we then developed a new methodology to overcome the shortcomings observed that was inspired by recent work in perceptual feature-based style transfer. These networks were trained on matched UAV perspective visual image, depth-map pairs generated using Microsoft’s AirSim high-fidelity UAV simulation engine and environment. The performance of each network was tested using a reserved test set at the end of training and the effectiveness evaluated using against three metrics. While our new network was not able to outperform any of the other approaches but cycle GANs, we believe that the intuition behind the approach was demonstrated to be valid and that it may be successfully refined with future work.",
"title": ""
},
{
"docid": "neg:1840162_11",
"text": "In this paper, we focus on designing an online credit card fraud detection framework with big data technologies, by which we want to achieve three major goals: 1) the ability to fuse multiple detection models to improve accuracy, 2) the ability to process large amount of data and 3) the ability to do the detection in real time. To accomplish that, we propose a general workflow, which satisfies most design ideas of current credit card fraud detection systems. We further implement the workflow with a new framework which consists of four layers: distributed storage layer, batch training layer, key-value sharing layer and streaming detection layer. With the four layers, we are able to support massive trading data storage, fast detection model training, quick model data sharing and real-time online fraud detection, respectively. We implement it with latest big data technologies like Hadoop, Spark, Storm, HBase, etc. A prototype is implemented and tested with a synthetic dataset, which shows great potentials of achieving the above goals.",
"title": ""
},
{
"docid": "neg:1840162_12",
"text": "Active testing has recently been introduced to effectively test concurrent programs. Active testing works in two phases. It first uses predictive off-the-shelf static or dynamic program analyses to identify potential concurrency bugs, such as data races, deadlocks, and atomicity violations. In the second phase, active testing uses the reports from these predictive analyses to explicitly control the underlying scheduler of the concurrent program to accurately and quickly discover real concurrency bugs, if any, with very high probability and little overhead. In this paper, we present an extensible framework for active testing of Java programs. The framework currently implements three active testers based on data races, atomic blocks, and deadlocks.",
"title": ""
},
{
"docid": "neg:1840162_13",
"text": "Diehl Aerospace GmbH (DAs) is currently involved in national German Research & Technology (R&T) projects (e.g. SYSTAVIO, SESAM) and in European R&T projects like ASHLEY to extend and to improve the Integrated Modular Avionics (IMA) technology. Diehl Aerospace is investing to expand its current IMA technology to enable further integration of systems including hardware modules, associated software, tools and processes while increasing the level of standardization. An additional objective is to integrate more systems on a common computing platform which uses the same toolchain, processes and integration experiences. New IMA components enable integration of high integrity fast loop system applications such as control applications. Distributed architectures which provide new types of interfaces allow integration of secondary power distribution systems along with other IMA functions. Cross A/C type usage is also a future emphasis to increase standardization and decrease development and operating costs as well as improvements on time to market and affordability of systems.",
"title": ""
},
{
"docid": "neg:1840162_14",
"text": "3D Morphable Models (3DMMs) are powerful statistical models of 3D facial shape and texture, and among the state-of-the-art methods for reconstructing facial shape from single images. With the advent of new 3D sensors, many 3D facial datasets have been collected containing both neutral as well as expressive faces. However, all datasets are captured under controlled conditions. Thus, even though powerful 3D facial shape models can be learnt from such data, it is difficult to build statistical texture models that are sufficient to reconstruct faces captured in unconstrained conditions (in-the-wild). In this paper, we propose the first, to the best of our knowledge, in-the-wild 3DMM by combining a powerful statistical model of facial shape, which describes both identity and expression, with an in-the-wild texture model. We show that the employment of such an in-the-wild texture model greatly simplifies the fitting procedure, because there is no need to optimise with regards to the illumination parameters. Furthermore, we propose a new fast algorithm for fitting the 3DMM in arbitrary images. Finally, we have captured the first 3D facial database with relatively unconstrained conditions and report quantitative evaluations with state-of-the-art performance. Complementary qualitative reconstruction results are demonstrated on standard in-the-wild facial databases.",
"title": ""
},
{
"docid": "neg:1840162_15",
"text": "Falls are responsible for considerable morbidity, immobility, and mortality among older persons, especially those living in nursing homes. Falls have many different causes, and several risk factors that predispose patients to falls have been identified. To prevent falls, a systematic therapeutic approach to residents who have fallen is necessary, and close attention must be paid to identifying and reducing risk factors for falls among frail older persons who have not yet fallen. We review the problem of falls in the nursing home, focusing on identifiable causes, risk factors, and preventive approaches. Epidemiology Both the incidence of falls in older adults and the severity of complications increase steadily with age and increased physical disability. Accidents are the fifth leading cause of death in older adults, and falls constitute two thirds of these accidental deaths. About three fourths of deaths caused by falls in the United States occur in the 13% of the population aged 65 years and older [1, 2]. Approximately one third of older adults living at home will fall each year, and about 5% will sustain a fracture or require hospitalization. The incidence of falls and fall-related injuries among persons living in institutions has been reported in numerous epidemiologic studies [3-18]. These data are presented in Table 1. The mean fall incidence calculated from these studies is about three times the rate for community-living elderly persons (mean, 1.5 falls/bed per year), caused both by the more frail nature of persons living in institutions and by more accurate reporting of falls in institutions. Table 1. Incidence of Falls and Fall-Related Injuries in Long-Term Care Facilities* As shown in Table 1, only about 4% of falls (range, 1% to 10%) result in fractures, whereas other serious injuries such as head trauma, soft-tissue injuries, and severe lacerations occur in about 11% of falls (range, 1% to 36%). 
However, once injured, an elderly person who has fallen has a much higher case fatality rate than does a younger person who has fallen [1, 2]. Each year, about 1800 fatal falls occur in nursing homes. Among persons 85 years and older, 1 of 5 fatal falls occurs in a nursing home [19]. Nursing home residents also have a disproportionately high incidence of hip fracture and have been shown to have higher mortality rates after hip fracture than community-living elderly persons [20]. Furthermore, because of the high frequency of recurrent falls in nursing homes, the likelihood of sustaining an injurious fall is substantial. In addition to injuries, falls can have serious consequences for physical functioning and quality of life. Loss of function can result from both fracture-related disability and self-imposed functional limitations caused by fear of falling and the postfall anxiety syndrome. Decreased confidence in the ability to ambulate safely can lead to further functional decline, depression, feelings of helplessness, and social isolation. In addition, the use of physical or chemical restraints by institutional staff to prevent high-risk persons from falling also has negative effects on functioning. Causes of Falls The major reported immediate causes of falls and their relative frequencies as described in four detailed studies of nursing home populations [14, 15, 17, 21] are presented in Table 2. The Table also contains a comparison column of causes of falls among elderly persons not living in institutions as summarized from seven detailed studies [21-28]. The distribution of causes clearly differs among the populations studied. Frail, high-risk persons living in institutions tend to have a higher incidence of falls caused by gait disorders, weakness, dizziness, and confusion, whereas the falls of community-living persons are more related to their environment. Table 2. 
Comparison of Causes of Falls in Nursing Home and Community-Living Populations: Summary of Studies That Carefully Evaluated Elderly Persons after a Fall and Specified a Most Likely Cause In the nursing home, weakness and gait problems were the most common causes of falls, accounting for about a quarter of reported cases. Studies have reported that the prevalence of detectable lower-extremity weakness ranges from 48% among community-living older persons [29] to 57% among residents of an intermediate-care facility [30] to more than 80% of residents of a skilled nursing facility [27]. Gait disorders affect 20% to 50% of elderly persons [31], and nearly three quarters of nursing home residents require assistance with ambulation or cannot ambulate [32]. Investigators of casecontrol studies in nursing homes have reported that more than two thirds of persons who have fallen have substantial gait disorders, a prevalence 2.4 to 4.8 times higher than the prevalence among persons who have not fallen [27, 30]. The cause of muscle weakness and gait problems is multifactorial. Aging introduces physical changes that affect strength and gait. On average, healthy older persons score 20% to 40% lower on strength tests than young adults [33], and, among chronically ill nursing home residents, strength is considerably less than that. Much of the weakness seen in the nursing home stems from deconditioning due to prolonged bedrest or limited physical activity and chronic debilitating medical conditions such as heart failure, stroke, or pulmonary disease. Aging is also associated with other deteriorations that impair gait, including increased postural sway; decreased gait velocity, stride length, and step height; prolonged reaction time; and decreased visual acuity and depth perception. Gait problems can also stem from dysfunction of the nervous, musculoskeletal, circulatory, or respiratory systems, as well as from simple deconditioning after a period of inactivity. 
Dizziness is commonly reported by elderly persons who have fallen and was the attributed cause in 25% of reported nursing home falls. This symptom is often difficult to evaluate because dizziness means different things to different people and has diverse causes. True vertigo, a sensation of rotational movement, may indicate a disorder of the vestibular apparatus such as benign positional vertigo, acute labyrinthitis, or Meniere disease. Symptoms described as imbalance on walking often reflect a gait disorder. Many residents describe a vague light-headedness that may reflect cardiovascular problems, hyperventilation, orthostatic hypotension, drug side effect, anxiety, or depression. Accidents, or falls stemming from environmental hazards, are a major cause of reported falls16% of nursing home falls and 41% of community falls. However, the circumstances of accidents are difficult to verify, and many falls in this category may actually stem from interactions between environmental hazards or hazardous activities and increased individual susceptibility to hazards because of aging and disease. Among impaired residents, even normal activities of daily living might be considered hazardous if they are done without assistance or modification. Factors such as decreased lower-extremity strength, poor posture control, and decreased step height all interact to impair the ability to avoid a fall after an unexpected trip or while reaching or bending. Age-associated impairments of vision, hearing, and memory also tend to increase the number of trips. Studies have shown that most falls in nursing homes occurred during transferring from a bed, chair, or wheelchair [3, 11]. Attempting to move to or from the bathroom and nocturia (which necessitates frequent trips to the bathroom) have also been reported to be associated with falls [34, 35] and fall-related fractures [9]. 
Environmental hazards that frequently contribute to these falls include wet floors caused by episodes of incontinence, poor lighting, bedrails, and improper bed height. Falls have also been reported to increase when nurse staffing is low, such as during breaks and at shift changes [4, 7, 9, 13], presumably because of lack of staff supervision. Confusion and cognitive impairment are frequently cited causes of falls and may reflect an underlying systemic or metabolic process (for example, electrolyte imbalance or fever). Dementia can increase the number of falls by impairing judgment, visual-spatial perception, and ability to orient oneself geographically. Falls also occur when residents with dementia wander, attempt to get out of wheelchairs, or climb over bed siderails. Orthostatic (postural) hypotension, usually defined as a decrease of 20 mm or more of systolic blood pressure after standing, has a 5% to 25% prevalence among normal elderly persons living at home [36]. It is even more common among persons with certain predisposing risk factors, including autonomic dysfunction, hypovolemia, low cardiac output, parkinsonism, metabolic and endocrine disorders, and medications (particularly sedatives, antihypertensives, vasodilators, and antidepressants) [37]. The orthostatic drop may be more pronounced on arising in the morning because the baroreflex response is diminished after prolonged recumbency, as it is after meals and after ingestion of nitroglycerin [38, 39]. Yet, despite its high prevalence, orthostatic hypotension infrequently causes falls, particularly outside of institutions. This is perhaps because of its transient nature, which makes it difficult to detect after the fall, or because most persons with orthostatic hypotension feel light-headed and will deliberately find a seat rather than fall. Drop attacks are defined as sudden falls without loss of consciousness and without dizziness, often precipitated by a sudden change in head position. 
This syndrome has been attributed to transient vertebrobasilar insufficiency, although it is probably caused by more diverse pathophysiologic mechanisms. Although early descriptions of geriatric falls identified drop attacks as a substantial cause, more recent studies have reported a smaller proportion of perso",
"title": ""
},
{
"docid": "neg:1840162_16",
"text": "Linear active-power-only power flow approximations are pervasive in the planning and control of power systems. However, AC power systems are governed by a system of nonlinear non-convex power flow equations. Existing linear approximations fail to capture key power flow variables including reactive power and voltage magnitudes, both of which are necessary in many applications that require voltage management and AC power flow feasibility. This paper proposes novel linear-programming models (the LPAC models) that incorporate reactive power and voltage magnitudes in a linear power flow approximation. The LPAC models are built on a polyhedral relaxation of the cosine terms in the AC equations, as well as Taylor approximations of the remaining nonlinear terms. Experimental comparisons with AC solutions on a variety of standard IEEE and Matpower benchmarks show that the LPAC models produce accurate values for active and reactive power, phase angles, and voltage magnitudes. The potential benefits of the LPAC models are illustrated on two “proof-of-concept” studies in power restoration and capacitor placement.",
"title": ""
},
{
"docid": "neg:1840162_17",
"text": "Hardware manipulations pose a serious threat to numerous systems, ranging from a myriad of smart-X devices to military systems. In many attack scenarios an adversary merely has access to the low-level, potentially obfuscated gate-level netlist. In general, the attacker possesses minimal information and faces the costly and time-consuming task of reverse engineering the design to identify security-critical circuitry, followed by the insertion of a meaningful hardware Trojan. These challenges have been considered only in passing by the research community. The contribution of this work is threefold: First, we present HAL, a comprehensive reverse engineering and manipulation framework for gate-level netlists. HAL allows automating defensive design analysis (e.g., including arbitrary Trojan detection algorithms with minimal effort) as well as offensive reverse engineering and targeted logic insertion. Second, we present a novel static analysis Trojan detection technique ANGEL which considerably reduces the false-positive detection rate of the detection technique FANCI. Furthermore, we demonstrate that ANGEL is capable of automatically detecting Trojans obfuscated with DeTrust. Third, we demonstrate how a malicious party can semi-automatically inject hardware Trojans into third-party designs. We present reverse engineering algorithms to disarm and trick cryptographic self-tests, and subtly leak cryptographic keys without any a priori knowledge of the design’s internal workings.",
"title": ""
}
] |
1840163 | Heart rate monitoring from wrist-type PPG based on singular spectrum analysis with motion decision | [
{
"docid": "pos:1840163_0",
"text": "Heart rate monitoring from wrist-type photoplethysmographic (PPG) signals during subjects' intensive exercise is a difficult problem, since the PPG signals are contaminated by extremely strong motion artifacts caused by subjects' hand movements. In this work, we formulate the heart rate estimation problem as a sparse signal recovery problem, and use a sparse signal recovery algorithm to calculate high-resolution power spectra of PPG signals, from which heart rates are estimated by selecting corresponding spectrum peaks. To facilitate the use of sparse signal recovery, we propose using bandpass filtering, singular spectrum analysis, and temporal difference operation to partially remove motion artifacts and sparsify PPG spectra. The proposed method was tested on PPG recordings from 10 subjects who were fast running at the peak speed of 15km/hour. The results showed that the averaged absolute estimation error was only 2.56 Beats/Minute, or 1.94% error compared to ground-truth heart rates from simultaneously recorded ECG.",
"title": ""
}
] | [
{
"docid": "neg:1840163_0",
"text": "Access to genetic and genomic resources can greatly facilitate biological understanding of plant species leading to improved crop varieties. While model plant species such as Arabidopsis have had nearly two decades of genetic and genomic resource development, many major crop species have seen limited development of these resources due to the large, complex nature of their genomes. Cultivated potato is among the ranks of crop species that, despite substantial worldwide acreage, have seen limited genetic and genomic tool development. As technologies advance, this paradigm is shifting and a number of tools are being developed for important crop species such as potato. This review article highlights numerous tools that have been developed for the potato community with a specific focus on the reference de novo genome assembly and annotation, genetic markers, transcriptomics resources, and newly emerging resources that extend beyond a single reference individual.",
"title": ""
},
{
"docid": "neg:1840163_1",
"text": "Generative Adversarial Networks (GANs) have recently achieved significant improvement on paired/unpaired image-to-image translation, such as photo→ sketch and artist painting style transfer. However, existing models can only be capable of transferring the low-level information (e.g. color or texture changes), but fail to edit high-level semantic meanings (e.g., geometric structure or content) of objects. On the other hand, while some researches can synthesize compelling real-world images given a class label or caption, they cannot condition on arbitrary shapes or structures, which largely limits their application scenarios and interpretive capability of model results. In this work, we focus on a more challenging semantic manipulation task, which aims to modify the semantic meaning of an object while preserving its own characteristics (e.g. viewpoints and shapes), such as cow→sheep, motor→ bicycle, cat→dog. To tackle such large semantic changes, we introduce a contrasting GAN (contrast-GAN) with a novel adversarial contrasting objective. Instead of directly making the synthesized samples close to target data as previous GANs did, our adversarial contrasting objective optimizes over the distance comparisons between samples, that is, enforcing the manipulated data be semantically closer to the real data with target category than the input data. Equipped with the new contrasting objective, a novel mask-conditional contrast-GAN architecture is proposed to enable disentangle image background with object semantic changes. Experiments on several semantic manipulation tasks on ImageNet and MSCOCO dataset show considerable performance gain by our contrast-GAN over other conditional GANs. Quantitative results further demonstrate the superiority of our model on generating manipulated results with high visual fidelity and reasonable object semantics.",
"title": ""
},
{
"docid": "neg:1840163_2",
"text": "Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as data augmentation. The dominant data augmentation approach in the field assumes that new training samples can be obtained via random geometric or appearance transformations applied to annotated training samples, but this is a strong assumption because it is unclear if this is a reliable generative model for producing new training samples. In this paper, we provide a novel Bayesian formulation to data augmentation, where new annotated training points are treated as missing variables and generated based on the distribution learned from the training set. For learning, we introduce a theoretically sound algorithm — generalised Monte Carlo expectation maximisation, and demonstrate one possible implementation via an extension of the Generative Adversarial Network (GAN). Classification results on MNIST, CIFAR-10 and CIFAR-100 show the better performance of our proposed method compared to the current dominant data augmentation approach mentioned above — the results also show that our approach produces better classification results than similar GAN models.",
"title": ""
},
{
"docid": "neg:1840163_3",
"text": "The potential cardiovascular benefits of several trending foods and dietary patterns are still incompletely understood, and nutritional science continues to evolve. However, in the meantime, a number of controversial dietary patterns, foods, and nutrients have received significant media exposure and are mired by hype. This review addresses some of the more popular foods and dietary patterns that are promoted for cardiovascular health to provide clinicians with accurate information for patient discussions in the clinical setting.",
"title": ""
},
{
"docid": "neg:1840163_4",
"text": "The past 10 years of event ordering research has focused on learning partial orderings over document events and time expressions. The most popular corpus, the TimeBank, contains a small subset of the possible ordering graph. Many evaluations follow suit by only testing certain pairs of events (e.g., only main verbs of neighboring sentences). This has led most research to focus on specific learners for partial labelings. This paper attempts to nudge the discussion from identifying some relations to all relations. We present new experiments on strongly connected event graphs that contain ∼10 times more relations per document than the TimeBank. We also describe a shift away from the single learner to a sieve-based architecture that naturally blends multiple learners into a precision-ranked cascade of sieves. Each sieve adds labels to the event graph one at a time, and earlier sieves inform later ones through transitive closure. This paper thus describes innovations in both approach and task. We experiment on the densest event graphs to date and show a 14% gain over state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840163_5",
"text": "Fetal mortality rate is considered a good measure of the quality of health care in a country or a medical facility. Looking at the current scenario, we find that we have focused more on the child mortality rate than on fetal mortality; the situation is much the same even in developed countries. Our aim is to provide a technological solution to help decrease the fetal mortality rate. Pregnant women have to come to the hospital 2-3 times a week for their regular checkups, which becomes a problem for working women and women with diabetes or other diseases. For these reasons it would be very helpful if they could do these checkups by themselves at home. This will reduce the frequency of their visits to the hospital while causing no compromise in the wellbeing of both the mother and the child. The end-to-end system consists of wearable sensors, built into a fabric belt, that collect and send vital signs of patients via Bluetooth to smart mobile phones for further processing, making them available to the required personnel and allowing efficient monitoring and alerting when attention is required in often challenging and chaotic scenarios.",
"title": ""
},
{
"docid": "neg:1840163_6",
"text": "Weakly-supervised semantic image segmentation suffers from lacking accurate pixel-level annotations. In this paper, we propose a novel graph convolutional network-based method, called GraphNet, to learn pixel-wise labels from weak annotations. Firstly, we construct a graph on the superpixels of a training image by combining the low-level spatial relation and high-level semantic content. Meanwhile, scribble or bounding box annotations are embedded into the graph, respectively. Then, GraphNet takes the graph as input and learns to predict high-confidence pseudo image masks by a convolutional network operating directly on graphs. At last, a segmentation network is trained supervised by these pseudo image masks. We comprehensively conduct experiments on the PASCAL VOC 2012 and PASCAL-CONTEXT segmentation benchmarks. Experimental results demonstrate that GraphNet is effective to predict the pixel labels with scribble or bounding box annotations. The proposed framework yields state-of-the-art results in the community.",
"title": ""
},
{
"docid": "neg:1840163_7",
"text": "The project proposes an efficient implementation of IoT (Internet of Things) for monitoring and controlling home appliances via the World Wide Web. The home automation system uses portable devices as a user interface. They can communicate with the home automation network through an Internet gateway, by means of low power communication protocols like Zigbee, Wi-Fi etc. This project aims at controlling home appliances via Smartphone using Wi-Fi as the communication protocol and Raspberry Pi as the server system. The user interacts directly with the system through a web-based interface over the web, whereas home appliances like lights, fan and door lock are remotely controlled through a simple website. An extra feature that enhances protection from fire accidents is its capability of detecting smoke, so that in the event of any fire an alerting message and an image are sent to the Smartphone. The server is interfaced with relay hardware circuits that control the appliances running at home. The communication with the server allows the user to select the appropriate device; the server then communicates with the corresponding relays. If the Internet connection is down or the server is not up, the embedded system board can still manage and operate the appliances locally. By this we provide a scalable and cost-effective home automation system.",
"title": ""
},
{
"docid": "neg:1840163_8",
"text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.",
"title": ""
},
{
"docid": "neg:1840163_9",
"text": "Multi-document summarization has been an important problem in information retrieval. It aims to distill the most important information from a set of documents to generate a compressed summary. Given a sentence graph generated from a set of documents where vertices represent sentences and edges indicate that the corresponding vertices are similar, the extracted summary can be described using the idea of graph domination. In this paper, we propose a new principled and versatile framework for multi-document summarization using the minimum dominating set. We show that four well-known summarization tasks including generic, query-focused, update, and comparative summarization can be modeled as different variations derived from the proposed framework. Approximation algorithms for performing summarization are also proposed and empirical experiments are conducted to demonstrate the effectiveness of our proposed framework.",
"title": ""
},
{
"docid": "neg:1840163_10",
"text": "Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g, eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state of the art performance and will enable a new generation of clinically-acceptable technology.",
"title": ""
},
{
"docid": "neg:1840163_11",
"text": "Neural machine translation (NMT) systems have recently achieved results comparable to the state of the art on a few translation tasks, including English→French and English→German. The main purpose of the Montreal Institute for Learning Algorithms (MILA) submission to WMT’15 is to evaluate this new approach on a greater variety of language pairs. Furthermore, the human evaluation campaign may help us and the research community to better understand the behaviour of our systems. We use the RNNsearch architecture, which adds an attention mechanism to the encoderdecoder. We also leverage some of the recent developments in NMT, including the use of large vocabularies, unknown word replacement and, to a limited degree, the inclusion of monolingual language models.",
"title": ""
},
{
"docid": "neg:1840163_12",
"text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under 300 mV power supply, making it the first nanoWatt compact analog-to-information (A2I) converter with embedded QRS detector.",
"title": ""
},
{
"docid": "neg:1840163_13",
"text": "We propose to use question answering (QA) data from Web forums to train chatbots from scratch, i.e., without dialog training data. First, we extract pairs of question and answer sentences from the typically much longer texts of questions and answers in a forum. We then use these shorter texts to train seq2seq models in a more efficient way. We further improve the parameter optimization using a new model selection strategy based on QA measures. Finally, we propose to use extrinsic evaluation with respect to a QA task as an automatic evaluation method for chatbots. The evaluation shows that the model achieves a MAP of 63.5% on the extrinsic task. Moreover, it can answer correctly 49.5% of the questions when they are similar to questions asked in the forum, and 47.3% of the questions when they are more conversational in style.",
"title": ""
},
{
"docid": "neg:1840163_14",
"text": "Information and communication technology (ICT) is integral in today’s healthcare as a critical piece of support to both track and improve patient and organizational outcomes. Facilitating nurses’ informatics competency development through continuing education is paramount to enhance their readiness to practice safely and accurately in technologically enabled work environments. In this article, we briefly describe progress in nursing informatics (NI) and share a project exemplar that describes our experience in the design, implementation, and evaluation of a NI educational event, a one-day boot camp format that was used to provide foundational knowledge in NI targeted primarily at frontline nurses in Alberta, Canada. We also discuss the project outcomes, including lessons learned and future implications. Overall, the boot camp was successful to raise nurses’ awareness about the importance of informatics in nursing practice.",
"title": ""
},
{
"docid": "neg:1840163_15",
"text": "Identifying sparse salient structures from dense pixels is a longstanding problem in visual computing. Solutions to this problem can benefit both image manipulation and understanding. In this paper, we introduce an image transform based on the L1 norm for piecewise image flattening. This transform can effectively preserve and sharpen salient edges and contours while eliminating insignificant details, producing a nearly piecewise constant image with sparse structures. A variant of this image transform can perform edge-preserving smoothing more effectively than existing state-of-the-art algorithms. We further present a new method for complex scene-level intrinsic image decomposition. Our method relies on the above image transform to suppress surface shading variations, and perform probabilistic reflectance clustering on the flattened image instead of the original input image to achieve higher accuracy. Extensive testing on the Intrinsic-Images-in-the-Wild database indicates our method can perform significantly better than existing techniques both visually and numerically. The obtained intrinsic images have been successfully used in two applications, surface retexturing and 3D object compositing in photographs.",
"title": ""
},
{
"docid": "neg:1840163_16",
"text": "Studies of search habits reveal that people engage in many search tasks involving collaboration with others, such as travel planning, organizing social events, or working on a homework assignment. However, current Web search tools are designed for a single user, working alone. We introduce SearchTogether, a prototype that enables groups of remote users to synchronously or asynchronously collaborate when searching the Web. We describe an example usage scenario, and discuss the ways SearchTogether facilitates collaboration by supporting awareness, division of labor, and persistence. We then discuss the findings of our evaluation of SearchTogether, analyzing which aspects of its design enabled successful collaboration among study participants.",
"title": ""
},
{
"docid": "neg:1840163_17",
"text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.",
"title": ""
},
{
"docid": "neg:1840163_18",
"text": "Software industry is heading towards centralized computing. Due to this trend data and programs are being taken away from traditional desktop PCs and placed in compute clouds instead. Compute clouds are enormous server farms packed with computing power and storage space accessible through the Internet. Instead of having to manage one’s own infrastructure to run applications, server time and storage space can be bought from an external service provider. From the customers’ point of view the benefit behind this idea is to be able to dynamically adjust computing power up or down to meet the demand for that power at a particular moment. This kind of flexibility not only ensures that no costs are incurred by excess processing capacity, but also enables hardware infrastructure to scale up with business growth. Because of growing interest in taking advantage of cloud computing a number of service providers are working on providing cloud services. As stated in [7], Amazon, Salesforce.com and Google are examples of firms that already have working solutions on the market. Recently also Microsoft released a preview version of its cloud platform called the Azure. Early adopters can test the platform and development tools free of charge.[2, 3, 4] The main purpose of this paper is to shed light on the internals of Microsoft’s Azure platform. In addition to examining how the Azure platform works, the benefits of the Azure platform are explored. The most important benefit in Microsoft’s solution is that it resembles the existing Windows environment a lot. Developers can use the same application programming interfaces (APIs) and development tools they are already used to. The second benefit is that migrating applications to the cloud is easy. This partially stems from the fact that Azure’s services can be exploited by an application whether it is run locally or in the cloud.",
"title": ""
},
{
"docid": "neg:1840163_19",
"text": "The 3L-NPC (three-level neutral-point-clamped) is the most popular multilevel converter used in high-power medium-voltage applications. An important disadvantage of this structure is the unequal distribution of losses among the switches. The performance of the 3L-NPC structure was improved by developing the 3L-ANPC (Active-NPC) converter, which has more degrees of freedom. In this paper the switching states and the loss distribution problem are studied for different PWM strategies. A new PWM strategy is also proposed in the paper. It has numerous advantages: (a) natural doubling of the apparent switching frequency without using the flying-capacitor concept, (b) dead times do not influence the operating mode at 50% of the duty cycle, (c) operating at both high and small switching frequencies without structural modifications and (d) better balancing of loss distribution in switches. PSIM simulation results are shown in order to validate the proposed PWM strategy and the analysis of the switching states.",
"title": ""
}
] |
1840164 | Privometer: Privacy protection in social networks | [
{
"docid": "pos:1840164_0",
"text": "This paper presents NetKit, a modular toolkit for classification in networked data, and a case-study of its application to a collection of networked data sets used in prior machine learning research. Networked data are relational data where entities are interconnected, and this paper considers the common case where entities whose labels are to be estimated are linked to entities for which the label is known. NetKit is based on a three-component framework, comprising a local classifier, a relational classifier, and a collective inference procedure. Various existing relational learning algorithms can be instantiated with appropriate choices for these three components and new relational learning algorithms can be composed by new combinations of components. The case study demonstrates how the toolkit facilitates comparison of different learning methods (which so far has been lacking in machine learning research). It also shows how the modular framework allows analysis of subcomponents, to assess which, whether, and when particular components contribute to superior performance. The case study focuses on the simple but important special case of univariate network classification, for which the only information available is the structure of class linkage in the network (i.e., only links and some class labels are available). To our knowledge, no work previously has evaluated systematically the power of class-linkage alone for classification in machine learning benchmark data sets. The results demonstrate clearly that simple network-classification models perform remarkably well—well enough that they should be used regularly as baseline classifiers for studies of relational learning for networked data. The results also show that there are a small number of component combinations that excel, and that different components are preferable in different situations, for example when few versus many labels are known.",
"title": ""
}
] | [
{
"docid": "neg:1840164_0",
"text": "We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of the weights. Using a two parameter field theory, we find that the network can break such symmetries itself towards the end of training in a process commonly known in physics as spontaneous symmetry breaking. This corresponds to a network generalizing itself without any user input layers to break the symmetry, but by communication with adjacent layers. In the layer decoupling limit applicable to residual networks (He et al., 2015), we show that the remnant symmetries that survive the non-linear layers are spontaneously broken. The Lagrangian for the non-linear and weight layers together has striking similarities with the one in quantum field theory of a scalar. Using results from quantum field theory we show that our framework is able to explain many experimentally observed phenomena, such as training on random labels with zero error (Zhang et al., 2017), the information bottleneck, the phase transition out of it and gradient variance explosion (Shwartz-Ziv & Tishby, 2017), shattered gradients (Balduzzi et al., 2017), and many more.",
"title": ""
},
{
"docid": "neg:1840164_1",
"text": "Projects with embedded systems are used for many different purposes, posing a major challenge for the community of developers of such systems. As we benefit from technological advances, the complexity of designing an embedded system increases significantly. This paper presents GERSE, a guideline for requirements elicitation for embedded systems. Despite advances in the area of embedded systems, there is a shortage of requirements elicitation techniques that meet the particularities of this area. The contribution of GERSE is to improve the capture and organization of embedded systems requirements.",
"title": ""
},
{
"docid": "neg:1840164_2",
"text": "Rheumatoid arthritis (RA) is a chronic inflammatory disease characterized by synovial inflammation that can lead to structural damage of cartilage, bone and tendons. Assessing the inflammatory activity and the severity is essential in RA to help rheumatologists adopt proper therapeutic strategies and evaluate disease outcome and response to treatment. In recent years musculoskeletal (MS) ultrasonography (US) has undergone tremendous technological development of equipment, with increased sensitivity in detecting a wide set of joint and soft tissue abnormalities. In RA, MSUS with the use of Doppler modalities is a useful imaging tool to depict inflammatory abnormalities (i.e. synovitis, tenosynovitis and bursitis) and structural changes (i.e. bone erosions, cartilage damage and tendon lesions). In addition, MSUS has been demonstrated to be able to monitor the response to different therapies in RA and to guide local diagnostic and therapeutic procedures such as biopsy, fluid aspirations and injections. Future applications based on the development of new tools may improve the role of MSUS in RA.",
"title": ""
},
{
"docid": "neg:1840164_3",
"text": "This paper analyzes the impact of user mobility in multi-tier heterogeneous networks. We begin by obtaining the handoff rate for a mobile user in an irregular cellular network with the access point locations modeled as a homogeneous Poisson point process. The received signal-to-interference-ratio (SIR) distribution along with a chosen SIR threshold is then used to obtain the probability of coverage. To capture potential connection failures due to mobility, we assume that a fraction of handoffs result in such failures. Considering a multi-tier network with orthogonal spectrum allocation among tiers and the maximum biased average received power as the tier association metric, we derive the probability of coverage for two cases: 1) the user is stationary (i.e., handoffs do not occur, or the system is not sensitive to handoffs); 2) the user is mobile, and the system is sensitive to handoffs. We derive the optimal bias factors to maximize the coverage. We show that when the user is mobile, and the network is sensitive to handoffs, both the optimum tier association and the probability of coverage depend on the user's speed; a speed-dependent bias factor can then adjust the tier association to effectively improve the coverage, and hence system performance, in a fully-loaded network.",
"title": ""
},
{
"docid": "neg:1840164_4",
"text": "This paper presents a structured ordinal measure method for video-based face recognition that simultaneously learns ordinal filters and structured ordinal features. The problem is posed as a non-convex integer program problem that includes two parts. The first part learns stable ordinal filters to project video data into a large-margin ordinal space. The second seeks self-correcting and discrete codes by balancing the projected data and a rank-one ordinal matrix in a structured low-rank way. Unsupervised and supervised structures are considered for the ordinal matrix. In addition, as a complement to hierarchical structures, deep feature representations are integrated into our method to enhance coding stability. An alternating minimization method is employed to handle the discrete and low-rank constraints, yielding high-quality codes that capture prior structures well. Experimental results on three commonly used face video databases show that our method with a simple voting classifier can achieve state-of-the-art recognition rates using fewer features and samples.",
"title": ""
},
{
"docid": "neg:1840164_5",
"text": "Large amounts of heterogeneous medical data have become available in various healthcare organizations (payers, providers, pharmaceuticals). Those data could be an enabling resource for deriving insights for improving care delivery and reducing waste. The enormity and complexity of these datasets present great challenges in analyses and subsequent applications to a practical clinical environment. In this tutorial, we introduce the characteristics and related mining challenges of dealing with big medical data. Many of those insights come from the medical informatics community, which is highly related to data mining but focuses on biomedical specifics. We survey various related papers from data mining venues as well as medical informatics venues to share with the audiences key problems and trends in healthcare analytics research, with applications including clinical text mining, predictive modeling, survival analysis, patient similarity, genetic data analysis, and public health. The tutorial will include several case studies dealing with some of the important healthcare applications.",
"title": ""
},
{
"docid": "neg:1840164_6",
"text": "This paper establishes a link between three areas, namely Max-Plus Linear System Theory as used for dealing with certain classes of discrete event systems, Network Calculus for establishing time bounds in communication networks, and real-time scheduling. In particular, it is shown that important results from scheduling theory can be easily derived and unified using Max-Plus Algebra. Based on the proposed network theory for real-time systems, the first polynomial algorithm for the feasibility analysis and optimal priority assignment for a general task model is derived.",
"title": ""
},
{
"docid": "neg:1840164_7",
"text": "We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks.",
"title": ""
},
{
"docid": "neg:1840164_8",
"text": "Scoring the quality of persuasive essays is an important goal of discourse analysis, addressed most recently with highlevel persuasion-related features such as thesis clarity, or opinions and their targets. We investigate whether argumentation features derived from a coarse-grained argumentative structure of essays can help predict essays scores. We introduce a set of argumentation features related to argument components (e.g., the number of claims and premises), argument relations (e.g., the number of supported claims) and typology of argumentative structure (chains, trees). We show that these features are good predictors of human scores for TOEFL essays, both when the coarsegrained argumentative structure is manually annotated and automatically predicted.",
"title": ""
},
{
"docid": "neg:1840164_9",
"text": "This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to the notion of probability assignment under the self-information loss function, which is directly related to the theory of universal data compression. Both the probabilistic setting and the deterministic setting of the universal prediction problem are described with emphasis on the analogy and the differences between results in the two settings.",
"title": ""
},
{
"docid": "neg:1840164_10",
"text": "Integrating data from multiple sources has been a longstanding challenge in the database community. Techniques such as privacy-preserving data mining promises privacy, but assume data has integration has been accomplished. Data integration methods are seriously hampered by inability to share the data to be integrated. This paper lays out a privacy framework for data integration. Challenges for data integration in the context of this framework are discussed, in the context of existing accomplishments in data integration. Many of these challenges are opportunities for the data mining community.",
"title": ""
},
{
"docid": "neg:1840164_11",
"text": "While kernel drivers have long been know to poses huge security risks, due to their privileged access and lower code quality, bug-finding tools for drivers are still greatly lacking both in quantity and effectiveness. This is because the pointer-heavy code in these drivers present some of the hardest challenges to static analysis, and their tight coupling with the hardware make dynamic analysis infeasible in most cases. In this work, we present DR. CHECKER, a soundy (i.e., mostly sound) bug-finding tool for Linux kernel drivers that is based on well-known program analysis techniques. We are able to overcome many of the inherent limitations of static analysis by scoping our analysis to only the most bug-prone parts of the kernel (i.e., the drivers), and by only sacrificing soundness in very few cases to ensure that our technique is both scalable and precise. DR. CHECKER is a fully-automated static analysis tool capable of performing general bug finding using both pointer and taint analyses that are flow-sensitive, context-sensitive, and fieldsensitive on kernel drivers. To demonstrate the scalability and efficacy of DR. CHECKER, we analyzed the drivers of nine production Linux kernels (3.1 million LOC), where it correctly identified 158 critical zero-day bugs with an overall precision of 78%.",
"title": ""
},
{
"docid": "neg:1840164_12",
"text": "Deliberate self-poisoning (DSP), the most common form of deliberate self-harm, is closely associated with suicide. Identifying risk factors of DSP is necessary for implementing prevention strategies. This study aimed to evaluate the relationship between benzodiazepine (BZD) treatment in psychiatric outpatients and DSP cases at emergency departments (EDs). We performed a retrospective nested case–control study of psychiatric patients receiving BZD therapy to evaluate the relationship between BZD use and the diagnosis of DSP at EDs using data from the nationwide Taiwan National Health Insurance Research Database. Regression analysis yielded an odds ratio (OR) and 95 % confidence interval (95 % CI) indicating that the use of BZDs in psychiatric outpatients was significantly associated with DSP cases at EDs (OR = 4.46, 95 % CI = 3.59–5.53). Having a history of DSP, sleep disorders, anxiety disorders, schizophrenia, depression, or bipolar disorder was associated with a DSP diagnosis at EDs (OR = 13.27, 95 % CI = 8.28–21.29; OR = 5.04, 95 % CI = 4.25–5.98; OR = 3.95, 95 % CI = 3.32–4.70; OR = 7.80, 95 % CI = 5.28–11.52; OR = 15.20, 95 % CI = 12.22–18.91; and OR = 18.48, 95 % CI = 10.13–33.7, respectively). After adjusting for potential confounders, BZD use remained significantly associated with a subsequent DSP diagnosis (adjusted OR = 2.47, 95 % CI = 1.93–3.17). Patients taking higher average cumulative BZD doses were at greater risk of DSP. Vigilant evaluation of the psychiatric status of patients prescribed with BZD therapy is critical for the prevention of DSP events at EDs.",
"title": ""
},
{
"docid": "neg:1840164_13",
"text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.",
"title": ""
},
{
"docid": "neg:1840164_14",
"text": "Cloud computing with its three key facets (i.e., Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service) and its inherent advantages (e.g., elasticity and scalability) still faces several challenges. The distance between the cloud and the end devices might be an issue for latency-sensitive applications such as disaster management and content delivery applications. Service level agreements (SLAs) may also impose processing at locations where the cloud provider does not have data centers. Fog computing is a novel paradigm to address such issues. It enables provisioning resources and services outside the cloud, at the edge of the network, closer to end devices, or eventually, at locations stipulated by SLAs. Fog computing is not a substitute for cloud computing but a powerful complement. It enables processing at the edge while still offering the possibility to interact with the cloud. This paper presents a comprehensive survey on fog computing. It critically reviews the state of the art in the light of a concise set of evaluation criteria. We cover both the architectures and the algorithms that make fog systems. Challenges and research directions are also introduced. In addition, the lessons learned are reviewed and the prospects are discussed in terms of the key role fog is likely to play in emerging technologies such as tactile Internet.",
"title": ""
},
{
"docid": "neg:1840164_15",
"text": "We investigate learning to probabilistically bypass computations in a network architecture. Our approach is motivated by AIG [44], where layers are conditionally executed depending on their inputs, and the network is trained against a target bypass rate using a per-layer loss. We propose a per-batch loss function, and describe strategies for handling probabilistic bypass during inference as well as training. Per-batch loss allows the network additional flexibility. In particular, a form of mode collapse becomes plausible, where some layers are nearly always bypassed and some almost never; such a configuration is strongly discouraged by AIG’s per-layer loss. We explore several inference-time strategies, including the natural MAP approach. With data-dependent bypass, we demonstrate improved performance over AIG. With data-independent bypass, as in stochastic depth [18], we observe mode collapse and effectively prune layers. We demonstrate our techniques on ResNet-50 and ResNet-101 [11] for ImageNet [3], where our techniques produce improved accuracy (.15–.41% in precision@1) with substantially less computation (bypassing 25–40% of the layers).",
"title": ""
},
{
"docid": "neg:1840164_16",
"text": "The smart grid changes the way how energy and information are exchanged and offers opportunities for incentive-based load balancing. For instance, customers may shift the time of energy consumption of household appliances in exchange for a cheaper energy tariff. This paves the path towards a full range of modular tariffs and dynamic pricing that incorporate the overall grid capacity as well as individual customer demands. This also allows customers to frequently switch within a variety of tariffs from different utility providers based on individual energy consumption and provision forecasts. For automated tariff decisions it is desirable to have a tool that assists in choosing the optimum tariff based on a prediction of individual energy need and production. However, the revelation of individual load patterns for smart grid applications poses severe privacy threats for customers as analyzed in depth in literature. Similarly, accurate and fine-grained regional load forecasts are sensitive business information of utility providers that are not supposed to be released publicly. This paper extends previous work in the domain of privacy-preserving load profile matching where load profiles from utility providers and load profile forecasts from customers are transformed in a distance-preserving embedding in order to find a matching tariff. The embeddings neither reveal individual contributions of customers, nor those of utility providers. Prior work requires a dedicated entity that needs to be trustworthy at least to some extent for determining the matches. In this paper we propose an adaption of this protocol, where we use blockchains and smart contracts for this matching process, instead. Blockchains are gaining widespread adaption in the smart grid domain as a powerful tool for public commitments and accountable calculations. 
While the use of a decentralized and trust-free blockchain for this protocol comes at the price of some privacy degradation (for which a mitigation is outlined), this drawback is outweighed for it enables verifiability, reliability and transparency. Fabian Knirsch, Andreas Unterweger, Günther Eibl and Dominik Engel Salzburg University of Applied Sciences, Josef Ressel Center for User-Centric Smart Grid Privacy, Security and Control, Urstein Süd 1, 5412 Puch bei Hallein, Austria. e-mail: fabian.knirsch@",
"title": ""
},
{
"docid": "neg:1840164_17",
"text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.",
"title": ""
},
{
"docid": "neg:1840164_18",
"text": "This paper presents the main foundations of big data applied to smart cities. A general Internet of Things based architecture is proposed to be applied to different smart cities applications. We describe two scenarios of big data analysis. One of them illustrates some services implemented in the smart campus of the University of Murcia. The second one is focused on a tram service scenario, where thousands of transit-card transactions should be processed. Results obtained from both scenarios show the potential of the applicability of this kind of techniques to provide profitable services of smart cities, such as the management of the energy consumption and comfort in smart buildings, and the detection of travel profiles in smart transport.",
"title": ""
},
{
"docid": "neg:1840164_19",
"text": "A new smart power switch for industrial, automotive and computer applications developed in BCD (Bipolar, CMOS, DMOS) technology is described. It consists of an on-chip 70 mΩ power DMOS transistor connected in high side configuration and its driver makes the device virtually indestructible and suitable to drive any kind of load with an output current of 2.5 A. If the load is inductive, an internal voltage clamp allows fast demagnetization down to 55 V under the supply voltage. The device includes novel structures for the driver, the fully integrated charge pump circuit and its oscillator. These circuits have specifically been designed to reduce ElectroMagnetic Interference (EMI) thanks to an accurate control of the output voltage slope and the reduction of the output voltage ripple caused by the charge pump itself (several patents pending). An innovative open load circuit allows the detection of the open load condition with high precision (2 to 4 mA within the temperature range and including process spreads). The quiescent current has also been reduced to 600 uA. Diagnostics for CPU feedback is available at the external connections of the chip when the following fault conditions occur: open load; output short circuit to supply voltage; overload or output short circuit to ground; over temperature; under voltage supply.",
"title": ""
}
] |
1840165 | Improving Neural Network Quantization without Retraining using Outlier Channel Splitting | [
{
"docid": "pos:1840165_0",
"text": "In this article, we describe an automatic differentiation module of PyTorch — a library designed to enable rapid research on machine learning models. It builds upon a few projects, most notably Lua Torch, Chainer, and HIPS Autograd [4], and provides a high performance environment with easy access to automatic differentiation of models executed on different devices (CPU and GPU). To make prototyping easier, PyTorch does not follow the symbolic approach used in many other deep learning frameworks, but focuses on differentiation of purely imperative programs, with a focus on extensibility and low overhead. Note that this preprint is a draft of certain sections from an upcoming paper covering all PyTorch features.",
"title": ""
},
{
"docid": "pos:1840165_1",
"text": "Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as “WAGE” to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.",
"title": ""
},
{
"docid": "pos:1840165_2",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. In order to reduce this cost, several quantization schemes have gained attention recently with some focusing on weight quantization, and others focusing on quantizing activations. This paper proposes novel techniques that target weight and activation quantizations separately resulting in an overall quantized neural network (QNN). The activation quantization technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. The weight quantization scheme, statistics-aware weight binning (SAWB), finds the optimal scaling factor that minimizes the quantization error based on the statistical characteristics of the distribution of weights without the need for an exhaustive search. The combination of PACT and SAWB results in a 2-bit QNN that achieves state-of-the-art classification accuracy (comparable to full precision networks) across a range of popular models and datasets.",
"title": ""
}
] | [
{
"docid": "neg:1840165_0",
"text": "Online human textual interaction often carries important emotional meanings inaccessible to computers. We propose an approach to textual emotion recognition in the context of computer-mediated communication. The proposed recognition approach works at the sentence level and uses the standard Ekman emotion classification. It is grounded in a refined keyword-spotting method that employs: a WordNet-based word lexicon, a lexicon of emoticons, common abbreviations and colloquialisms, and a set of heuristic rules. The approach is implemented through the Synesketch software system. Synesketch is published as a free, open source software library. Several Synesketch-based applications presented in the paper, such as the the emotional visual chat, stress the practical value of the approach. Finally, the evaluation of the proposed emotion recognition algorithm shows high accuracy and promising results for future research and applications.",
"title": ""
},
{
"docid": "neg:1840165_1",
"text": "We show that Tobin's q, as proxied by the ratio of the firm's market value to its book value, increases with the firm's systematic equity risk and falls with the firm's unsystematic equity risk. Further, an increase in the firm's total equity risk is associated with a fall in q. The negative relation between the change in total risk and the change in q is robust through time for the whole sample, but it does not hold for the largest firms.",
"title": ""
},
{
"docid": "neg:1840165_2",
"text": "With almost daily improvements in capabilities of artificial intelligence it is more important than ever to develop safety software for use by the AI research community. Building on our previous work on AI Containment Problem we propose a number of guidelines which should help AI safety researchers to develop reliable sandboxing software for intelligent programs of all levels. Such safety container software will make it possible to study and analyze intelligent artificial agent while maintaining certain level of safety against information leakage, social engineering attacks and cyberattacks from within the container.",
"title": ""
},
{
"docid": "neg:1840165_3",
"text": "Metamaterials have attracted more and more research attentions recently. Metamaterials for electromagnetic applications consist of sub-wavelength structures designed to exhibit particular responses to an incident EM (electromagnetic) wave. Traditional EM (electromagnetic) metamaterial is constructed from thick and rigid structures, with the form-factor suitable for applications only in higher frequencies (above GHz) in microwave band. In this paper, we developed a thin and flexible metamaterial structure with small-scale unit cell that gives EM metamaterials far greater flexibility in numerous applications. By incorporating ferrite materials, the thickness and size of the unit cell of metamaterials have been effectively scaled down. The design, mechanism and development of flexible ferrite loaded metamaterials for microwave applications is described, with simulation as well as measurements. Experiments show that the ferrite film with permeability of 10 could reduce the resonant frequency. The thickness of the final metamaterials is only 0.3mm. This type of ferrite loaded metamaterials offers opportunities for various sub-GHz microwave applications, such as cloaks, absorbers, and frequency selective surfaces.",
"title": ""
},
{
"docid": "neg:1840165_4",
"text": "The primary task of the peripheral vasculature (PV) is to supply the organs and extremities with blood, which delivers oxygen and nutrients, and to remove metabolic waste products. In addition, peripheral perfusion provides the basis of local immune response, such as wound healing and inflammation, and furthermore plays an important role in the regulation of body temperature. To adequately serve its many purposes, blood flow in the PV needs to be under constant tight regulation, both on a systemic level through nervous and hormonal control, as well as by local factors, such as metabolic tissue demand and hydrodynamic parameters. As a matter of fact, the body does not retain sufficient blood volume to fill the entire vascular space, and only 25% of the capillary bed is in use during resting state. The importance of microvascular control is clearly illustrated by the disastrous effects of uncontrolled blood pooling in the extremities, such as occurring during certain types of shock. Peripheral vascular disease (PVD) is the general name for a host of pathologic conditions of disturbed PV function. Peripheral vascular disease includes occlusive diseases of the arteries and the veins. An example is peripheral arterial occlusive disease (PAOD), which is the result of a buildup of plaque on the inside of the arterial walls, inhibiting proper blood supply to the organs. Symptoms include pain and cramping in extremities, as well as fatigue; ultimately, PAOD threatens limb vitality. The PAOD is often indicative of atherosclerosis of the heart and brain, and is therefore associated with an increased risk of myocardial infarction or cerebrovascular accident (stroke). Venous occlusive disease is the forming of blood clots in the veins, usually in the legs. Clots pose a risk of breaking free and traveling toward the lungs, where they can cause pulmonary embolism. 
In the legs, thromboses interfere with the functioning of the venous valves, causing blood pooling in the leg (postthrombotic syndrome) that leads to swelling and pain. Other causes of disturbances in peripheral perfusion include pathologies of the autoregulation of the microvasculature, such as in Reynaud’s disease or as a result of diabetes. To monitor vascular function, and to diagnose and monitor PVD, it is important to be able to measure and evaluate basic vascular parameters, such as arterial and venous blood flow, arterial blood pressure, and vascular compliance. Many peripheral vascular parameters can be assessed with invasive or minimally invasive procedures. Examples are the use of arterial catheters for blood pressure monitoring and the use of contrast agents in vascular X ray imaging for the detection of blood clots. Although they are sensitive and accurate, invasive methods tend to be more cumbersome to use, and they generally bear a greater risk of adverse effects compared to noninvasive techniques. These factors, in combination with their usually higher cost, limit the use of invasive techniques as screening tools. Another drawback is their restricted use in clinical research because of ethical considerations. Although many of the drawbacks of invasive techniques are overcome by noninvasive methods, the latter typically are more challenging because they are indirect measures, that is, they rely on external measurements to deduce internal physiologic parameters. Noninvasive techniques often make use of physical and physiologic models, and one has to be mindful of imperfections in the measurements and the models, and their impact on the accuracy of results. Noninvasive methods therefore require careful validation and comparison to accepted, direct measures, which is the reason why these methods typically undergo long development cycles. 
Even though the genesis of many noninvasive techniques reaches back as far as the late nineteenth century, it was the technological advances of the second half of the twentieth century in such fields as micromechanics, microelectronics, and computing technology that led to the development of practical implementations. The field of noninvasive vascular measurements has undergone a developmental explosion over the last two decades, and it is still very much a field of ongoing research and development. This article describes the most important and most frequently used methods for noninvasive assessment of 234 PERIPHERAL VASCULAR NONINVASIVE MEASUREMENTS",
"title": ""
},
{
"docid": "neg:1840165_5",
"text": "Multiview LSA (MVLSA) is a generalization of Latent Semantic Analysis (LSA) that supports the fusion of arbitrary views of data and relies on Generalized Canonical Correlation Analysis (GCCA). We present an algorithm for fast approximate computation of GCCA, which when coupled with methods for handling missing values, is general enough to approximate some recent algorithms for inducing vector representations of words. Experiments across a comprehensive collection of test-sets show our approach to be competitive with the state of the art.",
"title": ""
},
{
"docid": "neg:1840165_6",
"text": "We present a statistical phrase-based translation model that useshierarchical phrases — phrases that contain subphrases. The model is formally a synchronous context-free grammar but is learned from a bitext without any syntactic information. Thus it can be seen as a shift to the formal machinery of syntaxbased translation systems without any linguistic commitment. In our experiments using BLEU as a metric, the hierarchical phrasebased model achieves a relative improvement of 7.5% over Pharaoh, a state-of-the-art phrase-based system.",
"title": ""
},
{
"docid": "neg:1840165_7",
"text": "This paper presents a demonstration of how AI can be useful in the game design and development process of a modern board game. By using an artificial intelligence algorithm to play a substantial amount of matches of the Ticket to Ride board game and collecting data, we can analyze several features of the gameplay as well as of the game board. Results revealed loopholes in the game’s rules and pointed towards trends in how the game is played. We are then led to the conclusion that large scale simulation utilizing artificial intelligence can offer valuable information regarding modern board games and their designs that would ordinarily be prohibitively expensive or time-consuming to discover manually.",
"title": ""
},
{
"docid": "neg:1840165_8",
"text": "In the early ages of implantable devices, radio frequency (RF) technologies were not commonplace due to the challenges stemming from the inherent nature of biological tissue boundaries. As technology improved and our understanding matured, the benefit of RF in biomedical applications surpassed the implementation challenges and is thus becoming more widespread. The fundamental challenge is due to the significant electromagnetic (EM) effects of the body at high frequencies. The EM absorption and impedance boundaries of biological tissue result in significant reduction of power and signal integrity for transcutaneous propagation of RF fields. Furthermore, the dielectric properties of the body tissue surrounding the implant must be accounted for in the design of its RF components, such as antennas and inductors, and the tissue is often heterogeneous and the properties are highly variable. Additional challenges for implantable applications include the need for miniaturization, power minimization, and often accounting for a conductive casing due to biocompatibility and hermeticity requirements [1]?[3]. Today, wireless technologies are essentially a must have in most electrical implants due to the need to communicate with the device and even transfer usable energy to the implant [4], [5]. Low-frequency wireless technologies face fewer challenges in this implantable setting than its higher frequency, or RF, counterpart, but are limited to much lower communication speeds and typically have a very limited operating distance. The benefits of high-speed communication and much greater communication distances in biomedical applications have spawned numerous wireless standards committees, and the U.S. Federal Communications Commission (FCC) has allocated numerous frequency bands for medical telemetry as well as those to specifically target implantable applications. 
The development of analytical models, advanced EM simulation software, and representative RF human phantom recipes has significantly facilitated design and optimization of RF components for implantable applications.",
"title": ""
},
{
"docid": "neg:1840165_9",
"text": "Nairovirus, one of five bunyaviral genera, includes seven species. Genomic sequence information is limited for members of the Dera Ghazi Khan, Hughes, Qalyub, Sakhalin, and Thiafora nairovirus species. We used next-generation sequencing and historical virus-culture samples to determine 14 complete and nine coding-complete nairoviral genome sequences to further characterize these species. Previously unsequenced viruses include Abu Mina, Clo Mor, Great Saltee, Hughes, Raza, Sakhalin, Soldado, and Tillamook viruses. In addition, we present genomic sequence information on additional isolates of previously sequenced Avalon, Dugbe, Sapphire II, and Zirqa viruses. Finally, we identify Tunis virus, previously thought to be a phlebovirus, as an isolate of Abu Hammad virus. Phylogenetic analyses indicate the need for reassignment of Sapphire II virus to Dera Ghazi Khan nairovirus and reassignment of Hazara, Tofla, and Nairobi sheep disease viruses to novel species. We also propose new species for the Kasokero group (Kasokero, Leopards Hill, Yogue viruses), the Ketarah group (Gossas, Issyk-kul, Keterah/soft tick viruses) and the Burana group (Wēnzhōu tick virus, Huángpí tick virus 1, Tǎchéng tick virus 1). Our analyses emphasize the sister relationship of nairoviruses and arenaviruses, and indicate that several nairo-like viruses (Shāyáng spider virus 1, Xīnzhōu spider virus, Sānxiá water strider virus 1, South Bay virus, Wǔhàn millipede virus 2) require establishment of novel genera in a larger nairovirus-arenavirus supergroup.",
"title": ""
},
{
"docid": "neg:1840165_10",
"text": "Insulin resistance plays a major role in the pathogenesis of the metabolic syndrome and type 2 diabetes, and yet the mechanisms responsible for it remain poorly understood. Magnetic resonance spectroscopy studies in humans suggest that a defect in insulin-stimulated glucose transport in skeletal muscle is the primary metabolic abnormality in insulin-resistant patients with type 2 diabetes. Fatty acids appear to cause this defect in glucose transport by inhibiting insulin-stimulated tyrosine phosphorylation of insulin receptor substrate-1 (IRS-1) and IRS-1-associated phosphatidylinositol 3-kinase activity. A number of different metabolic abnormalities may increase intramyocellular and intrahepatic fatty acid metabolites; these include increased fat delivery to muscle and liver as a consequence of either excess energy intake or defects in adipocyte fat metabolism, and acquired or inherited defects in mitochondrial fatty acid oxidation. Understanding the molecular and biochemical defects responsible for insulin resistance is beginning to unveil novel therapeutic targets for the treatment of the metabolic syndrome and type 2 diabetes.",
"title": ""
},
{
"docid": "neg:1840165_11",
"text": "In this paper, a 10-bit 0.5V 100 kS/s successive approximation register (SAR) analog-to-digital converter (ADC) with a new fully dynamic rail-to-rail comparator is presented. The proposed comparator enhances the input signal range to the rail-to-rail mode, and hence, improves the signal-to-noise ratio (SNR) of the ADC in low supply voltages. The effect of the latch offset voltage is reduced by providing a higher voltage gain in the regenerative latch. To reduce the ADC power consumption further, the binary-weighted capacitive array with an attenuation capacitor (BWA) is employed as the digital-to-analog converter (DAC) in this design. The ADC is designed and simulated in a 90 nm CMOS process with a single 0.5V power supply. Spectre simulation results show that the average power consumption of the proposed ADC is about 400 nW and the peak signal-to-noise plus distortion ratio (SNDR) is 56 dB. By considering 10% increase in total ADC power consumption due to the parasitics and a loss of 0.22 LSB in ENOB due to the DAC capacitors mismatch, the achieved figure of merit (FoM) is 11.4 fJ/conversion-step.",
"title": ""
},
{
"docid": "neg:1840165_12",
"text": "Workflow management systems will change the architecture of future information systems dramatically. The explicit representation of business procedures is one of the main issues when introducing a workflow management system. In this paper we focus on a class of Petri nets suitable for the representation, validation and verification of these procedures. We will show that the correctness of a procedure represented by such a Petri net can be verified by using standard Petri-net-based techniques. Based on this result we provide a comprehensive set of transformation rules which can be used to construct and modify correct procedures.",
"title": ""
},
{
"docid": "neg:1840165_13",
"text": "Image captioning is a challenging task where the machine automatically describes an image by sentences or phrases. It often requires a large number of paired image-sentence annotations for training. However, a pre-trained captioning model can hardly be applied to a new domain in which some novel object categories exist, i.e., the objects and their description words are unseen during model training. To correctly caption the novel object, it requires professional human workers to annotate the images by sentences with the novel words. It is labor expensive and thus limits its usage in real-world applications. In this paper, we introduce the zero-shot novel object captioning task where the machine generates descriptions without extra training sentences about the novel object. To tackle the challenging problem, we propose a Decoupled Novel Object Captioner (DNOC) framework that can fully decouple the language sequence model from the object descriptions. DNOC has two components. 1) A Sequence Model with the Placeholder (SM-P) generates a sentence containing placeholders. The placeholder represents an unseen novel object. Thus, the sequence model can be decoupled from the novel object descriptions. 2) A key-value object memory built upon the freely available detection model, contains the visual information and the corresponding word for each object. A query generated from the SM-P is used to retrieve the words from the object memory. The placeholder will further be filled with the correct word, resulting in a caption with novel object descriptions. The experimental results on the held-out MSCOCO dataset demonstrate the ability of DNOC in describing novel concepts.",
"title": ""
},
{
"docid": "neg:1840165_14",
"text": "This paper addresses the problem of using unmanned aerial vehicles for the transportation of suspended loads. The proposed solution introduces a novel control law capable of steering the aerial robot to a desired reference while simultaneously limiting the sway of the payload. The stability of the equilibrium is proven rigorously through the application of the nested saturation formalism. Numerical simulations demonstrating the effectiveness of the controller are provided.",
"title": ""
},
{
"docid": "neg:1840165_15",
"text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.",
"title": ""
},
{
"docid": "neg:1840165_16",
"text": "Face recognition has the perception of a solved problem; however, when tested at the million scale, it exhibits dramatic variation in accuracies across the different algorithms [11]. Are the algorithms very different? Is access to good/big training data their secret weapon? Where should face recognition improve? To address those questions, we created a benchmark, MF2, that requires all algorithms to be trained on the same data, and tested at the million scale. MF2 is a public large-scale set with 672K identities and 4.7M photos created with the goal of leveling the playing field for large-scale face recognition. We contrast our results with findings from the other two large-scale benchmarks MegaFace Challenge and MS-Celebs-1M where groups were allowed to train on any private/public/big/small set. Some key discoveries: 1) algorithms, trained on MF2, were able to achieve state of the art and comparable results to algorithms trained on massive private sets, 2) some outperformed themselves once trained on MF2, 3) invariance to aging suffers from low accuracies as in MegaFace, identifying the need for larger age variations possibly within identities or adjustment of algorithms in future testing.",
"title": ""
},
{
"docid": "neg:1840165_17",
"text": "The homomorphic encryption problem has been an open one for three decades. Recently, Gentry has proposed a full solution. Subsequent works have made improvements on it. However, the time complexities of these algorithms are still too high for practical use. For example, Gentry’s homomorphic encryption scheme takes more than 900 seconds to add two 32 bit numbers, and more than 67000 seconds to multiply them. In this paper, we develop a non-circuit based symmetric-key homomorphic encryption scheme. It is proven that the security of our encryption scheme is equivalent to the large integer factorization problem, and it can withstand an attack with up to lnpoly chosen plaintexts for any predetermined , where is the security parameter. Multiplication, encryption, and decryption are almost linear in , and addition is linear in . Performance analyses show that our algorithm runs multiplication in 108 milliseconds and addition in a tenth of a millisecond for = 1024 and = 16. We further consider practical multiple-user data-centric applications. Existing homomorphic encryption schemes only consider one master key. To allow multiple users to retrieve data from a server, all users need to have the same key. In this paper, we propose to transform the master encryption key into different user keys and develop a protocol to support correct and secure communication between the users and the server using different user keys. In order to prevent collusion between some user and the server to derive the master key, one or more key agents can be added to mediate the interaction.",
"title": ""
}
] |
1840166 | Compositional Falsification of Cyber-Physical Systems with Machine Learning Components | [
{
"docid": "pos:1840166_0",
"text": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.",
"title": ""
}
] | [
{
"docid": "neg:1840166_0",
"text": "Each human intestine harbours not only hundreds of trillions of bacteria but also bacteriophage particles, viruses, fungi and archaea, which constitute a complex and dynamic ecosystem referred to as the gut microbiota. An increasing number of data obtained during the last 10 years have indicated changes in gut bacterial composition or function in type 2 diabetic patients. Analysis of this ‘dysbiosis’ enables the detection of alterations in specific bacteria, clusters of bacteria or bacterial functions associated with the occurrence or evolution of type 2 diabetes; these bacteria are predominantly involved in the control of inflammation and energy homeostasis. Our review focuses on two key questions: does gut dysbiosis truly play a role in the occurrence of type 2 diabetes, and will recent discoveries linking the gut microbiota to host health be helpful for the development of novel therapeutic approaches for type 2 diabetes? Here we review how pharmacological, surgical and nutritional interventions for type 2 diabetic patients may impact the gut microbiota. Experimental studies in animals are identifying which bacterial metabolites and components act on host immune homeostasis and glucose metabolism, primarily by targeting intestinal cells involved in endocrine and gut barrier functions. We discuss novel approaches (e.g. probiotics, prebiotics and faecal transfer) and the need for research and adequate intervention studies to evaluate the feasibility and relevance of these new therapies for the management of type 2 diabetes.",
"title": ""
},
{
"docid": "neg:1840166_1",
"text": "The rivalry between the cathode-ray tube and flat-panel displays (FPDs) has intensified as the performance of some FPDs now exceeds that of the entrenched leader in many cases. Besides the well-known active-matrix-addressed liquid-crystal display, plasma, organic light-emitting diodes, and liquid-crystal-on-silicon displays are now finding new applications as the manufacturing, process engineering, materials, and cost structures become standardized and suitable for large markets.",
"title": ""
},
{
"docid": "neg:1840166_2",
"text": "Recent years have seen the development of a satellite communication system called a high-throughput satellite (HTS), which enables large-capacity communication to cope with various communication demands. Current HTSs have a fixed allocation of communication resources and cannot flexibly change this allocation during operation. Thus, effectively allocating communication resources for communication demands with a bias is not possible. Therefore, technology is being developed to add flexibility to satellite communication systems, but there is no system analysis model available to quantitatively evaluate the flexibility performance. In this study, we constructed a system analysis model to quantitatively evaluate the flexibility of a satellite communication system and used it to analyze a satellite communication system equipped with a digital channelizer.",
"title": ""
},
{
"docid": "neg:1840166_3",
"text": "The aim of this paper is to explore some, ways of linking ethnographic studies of work in context with the design of CSCW systems. It uses examples from an interdisciplinary collaborative project on air traffic control. Ethnographic methods are introduced, and applied to identifying the social organization of this cooperative work, and the use of instruments within it. On this basis some metaphors for the electronic representation of current manual practices are presented, and their possibilities and limitations are discussed.",
"title": ""
},
{
"docid": "neg:1840166_4",
"text": "Online social networks such as Friendster, MySpace, or the Facebook have experienced exponential growth in membership in recent years. These networks offer attractive means for interaction and communication, but also raise privacy and security concerns. In this study we survey a representative sample of the members of the Facebook (a social network for colleges and high schools) at a US academic institution, and compare the survey data to information retrieved from the network itself. We look for underlying demographic or behavioral differences between the communities of the network’s members and non-members; we analyze the impact of privacy concerns on members’ behavior; we compare members’ stated attitudes with actual behavior; and we document the changes in behavior subsequent to privacy-related information exposure. We find that an individual’s privacy concerns are only a weak predictor of his membership to the network. Also privacy concerned individuals join the network and reveal great amounts of personal information. Some manage their privacy concerns by trusting their ability to control the information they provide and the external access to it. However, we also find evidence of members’ misconceptions about the online community’s actual size and composition, and about the visibility of members’ profiles.",
"title": ""
},
{
"docid": "neg:1840166_5",
"text": "1.1 Equivalent definitions of a stable distribution, 2; 1.2 Properties of stable random variables, 10; 1.3 Symmetric α-stable random variables, 20; 1.4 Series representation, 21; 1.5 Series representation of skewed α-stable random variables, 30; 1.6 Graphs and tables of α-stable densities and c.d.f.'s, 35; 1.7 Simulation, 41; 1.8 Exercises, 49",
"title": ""
},
{
"docid": "neg:1840166_6",
"text": "We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.",
"title": ""
},
{
"docid": "neg:1840166_7",
"text": "Knowledge representation learning (KRL), exploited by various applications such as question answering and information retrieval, aims to embed the entities and relations contained by the knowledge graph into points of a vector space such that the semantic and structure information of the graph is well preserved in the representing space. However, the previous works mainly learned the embedding representations by treating each entity and relation equally which tends to ignore the inherent imbalance and heterogeneous properties existing in knowledge graph. By visualizing the representation results obtained from classic algorithm TransE in detail, we reveal the disadvantages caused by this homogeneous learning strategy and gain insight of designing policy for the homogeneous representation learning. In this paper, we propose a novel margin-based pairwise representation learning framework to be incorporated into many KRL approaches, with the method of introducing adaptivity according to the degree of knowledge heterogeneity. More specially, an adaptive margin appropriate to separate the real samples from fake samples in the embedding space is first proposed based on the sample’s distribution density, and then an adaptive weight is suggested to explicitly address the trade-off between the different contributions coming from the real and fake samples respectively. The experiments show that our Adaptive Weighted Margin Learning (AWML) framework can help the previous work achieve a better performance on real-world Knowledge Graphs Freebase and WordNet in the tasks of both link prediction and triplet classification.",
"title": ""
},
{
"docid": "neg:1840166_8",
"text": "Action recognition is an important research problem of human motion analysis (HMA). In recent years, 3D observation-based action recognition has been receiving increasing interest in the multimedia and computer vision communities, due to the recent advent of cost-effective sensors, such as depth camera Kinect. This work takes this one step further, focusing on early recognition of ongoing 3D human actions, which is beneficial for a large variety of time-critical applications, e.g., gesture-based human machine interaction, somatosensory games, and so forth. Our goal is to infer the class label information of 3D human actions with partial observation of temporally incomplete action executions. By considering 3D action data as multivariate time series (m.t.s.) synchronized to a shared common clock (frames), we propose a stochastic process called dynamic marked point process (DMP) to model the 3D action as temporal dynamic patterns, where both timing and strength information are captured. To achieve even more early and better accuracy of recognition, we also explore the temporal dependency patterns between feature dimensions. A probabilistic suffix tree is constructed to represent sequential patterns among features in terms of the variable-order Markov model (VMM). Our approach and several baselines are evaluated on five 3D human action datasets. Extensive results show that our approach achieves superior performance for early recognition of 3D human actions.",
"title": ""
},
{
"docid": "neg:1840166_9",
"text": "Movement observation and imagery are increasingly propagandized for motor rehabilitation. Both observation and imagery are thought to improve motor function through repeated activation of mental motor representations. However, it is unknown what stimulation parameters or imagery conditions are optimal for rehabilitation purposes. A better understanding of the mechanisms underlying movement observation and imagery is essential for the optimization of functional outcome using these training conditions. This study systematically assessed the corticospinal excitability during rest, observation, imagery and execution of a simple and a complex finger-tapping sequence in healthy controls using transcranial magnetic stimulation (TMS). Observation was conducted passively (without prior instructions) as well as actively (in order to imitate). Imagery was performed visually and kinesthetically. A larger increase in corticospinal excitability was found during active observation in comparison with passive observation and visual or kinesthetic imagery. No significant difference between kinesthetic and visual imagery was found. Overall, the complex task led to a higher corticospinal excitability in comparison with the simple task. In conclusion, the corticospinal excitability was modulated during both movement observation and imagery. Specifically, active observation of a complex motor task resulted in increased corticospinal excitability. Active observation may be more effective than imagery for motor rehabilitation purposes. In addition, the activation of mental motor representations may be optimized by varying task-complexity.",
"title": ""
},
{
"docid": "neg:1840166_10",
"text": "We suggest a new technique to reduce energy consumption in the processor datapath without sacrificing performance by exploiting operand value locality at run time. Data locality is one of the major characteristics of video streams as well as other commonly used applications. We use a cache-like scheme to store a selective history of computation results, and the resultant reuse leads to power savings. The cache is indexed by the operands. Based on our model, an 8 to 128 entry execution cache reduces power consumption by 20% to 60%.",
"title": ""
},
{
"docid": "neg:1840166_11",
"text": "Mobile phones are increasingly used for security sensitive activities such as online banking or mobile payments. This usually involves some cryptographic operations, and therefore introduces the problem of securely storing the corresponding keys on the phone. In this paper we evaluate the security provided by various options for secure storage of key material on Android, using either Android's service for key storage or the key storage solution in the Bouncy Castle library. The security provided by the key storage service of the Android OS depends on the actual phone, as it may or may not make use of ARM TrustZone features. Therefore we investigate this for different models of phones.\n We find that the hardware-backed version of the Android OS service does offer device binding -- i.e. keys cannot be exported from the device -- though they could be used by any attacker with root access. This last limitation is not surprising, as it is a fundamental limitation of any secure storage service offered from the TrustZone's secure world to the insecure world. Still, some of Android's documentation is a bit misleading here.\n Somewhat to our surprise, we find that in some respects the software-only solution of Bouncy Castle is stronger than the Android OS service using TrustZone's capabilities, in that it can incorporate a user-supplied password to secure access to keys and thus guarantee user consent.",
"title": ""
},
{
"docid": "neg:1840166_12",
"text": "If a theory of concept composition aspires to psychological plausibility, it may first need to address several preliminary issues associated with naturally occurring human concepts: content variability, multiple representational forms, and pragmatic constraints. Not only do these issues constitute a significant challenge for explaining individual concepts, they pose an even more formidable challenge for explaining concept compositions. How do concepts combine as their content changes, as different representational forms become active, and as pragmatic constraints shape processing? Arguably, concepts are most ubiquitous and important in compositions, relative to when they occur in isolation. Furthermore, entering into compositions may play central roles in producing the changes in content, form, and pragmatic relevance observed for individual concepts. Developing a theory of concept composition that embraces and illuminates these issues would not only constitute a significant contribution to the study of concepts, it would provide insight into the nature of human cognition. The human ability to construct and combine concepts is prolific. On the one hand, people acquire tens of thousands of concepts for diverse categories of settings, agents, objects, actions, mental states, bodily states, properties, relations, and so forth. On the other, people combine these concepts to construct infinite numbers of more complex concepts, as the open-ended phrases, sentences, and texts that humans produce effortlessly and ubiquitously illustrate. Major changes in the brain, the emergence of language, and new capacities for social cognition all probably played central roles in the evolution of these impressive conceptual abilities (e.g., Deacon 1997; Donald 1993; Tomasello 2009). In psychology alone, much research addresses human concepts (e.g., Barsalou 2012; Murphy 2002; Smith and Medin 1981) and concept composition (often referred to as conceptual combination; e.g., Costello and Keane 2000; Gagné and Spalding 2014; Hampton 1997; Hampton and Jönsson 2012; Medin and Shoben 1988; Murphy 1988; Wisniewski 1997; Wu and Barsalou 2009). More generally across the cognitive sciences, much additional research addresses concepts and the broader construct of compositionality (for a recent collection, see Werning et al. 2012). 1 Background Framework. A grounded approach to concepts. Here I assume that a concept is a dynamical distributed network in the brain coupled with a category in the environment or experience, with this network guiding situated interactions with the category’s instances (for further detail, see Barsalou 2003b, 2009, 2012, 2016a, 2016b). The concept of bicycle, for example, represents and guides interactions with the category of bicycles in the world. Across interactions with a category’s instances, a concept develops in memory by aggregating information from perception, action, and internal states. Thus, the concept of bicycle develops from aggregating multimodal information related to bicycles across the situations in which they are experienced. As a consequence of using selective attention to extract information relevant to the concept of bicycle from the current situation (e.g., a perceived bicycle), and then using integration mechanisms to integrate it with other bicycle information already in memory, aggregate information for the category develops continually (Barsalou 1999). As described later, however, background situational knowledge is also captured that plays important roles in conceptual processing (Barsalou 2016b, 2003b; Yeh and Barsalou 2006). Although learning plays central roles in establishing concepts, genetic and epigenetic processes constrain the features that can be represented for a concept, and also their integration in the brain’s association areas (e.g., Simmons and Barsalou 2003). For example, biologically-based neural circuits may anticipate the conceptual structure of evolutionarily important concepts, such as agents, minds, animals, foods, and tools. Once the conceptual system is in place, it supports virtually all other forms of cognitive activity, both online in the current situation and offline when representing the world in language, memory, and thought (e.g., Barsalou 2012, 2016a, 2016b). From the perspective developed here, when conceptual knowledge is needed for a task, concepts produce situation-specific simulations of the relevant category dynamically, where a simulation attempts to reenact the kind of neural and bodily states associated with processing the category. On needing conceptual knowledge about bicycles, for example, a small subset of the distributed bicycle network in the brain becomes active to simulate what it would be like to interact with an actual bicycle. This multimodal simulation provides anticipatory inferences about what is likely to be perceived further for the bicycle in the current situation, how to interact with it effectively, and what sorts of internal states might result (Barsalou 2009). The specific bicycle simulation that becomes active is one of infinitely many simulations that could be constructed dynamically from the bicycle network—the entire network never becomes fully active. Typically, simulations remain unconscious, at least to a large extent, while causally influencing cognition, affect, and",
"title": ""
},
{
"docid": "neg:1840166_13",
"text": "Authentication is an important topic in cloud computing security, which is why various authentication techniques in the cloud environment are presented in this paper. Authentication serves as protection against different sorts of attacks; its goal is to confirm the identity of a user when that user requests services from cloud servers. Multiple authentication technologies have been put forward so far that confirm user identity before giving the permit to access resources. Each of these technologies (username and password, multi-factor authentication, mobile trusted module, public key infrastructure, single sign-on, and biometric authentication) is first described here. The different techniques presented are then compared. Keywords— Cloud computing, security, authentication, access control,",
"title": ""
},
{
"docid": "neg:1840166_14",
"text": "Neuroeconomics seeks to gain a greater understanding of decision making by combining theoretical and methodological principles from the fields of psychology, economics, and neuroscience. Initial studies using this multidisciplinary approach have found evidence suggesting that the brain may be employing multiple levels of processing when making decisions, and this notion is consistent with dual-processing theories that have received extensive theoretical consideration in the field of cognitive psychology, with these theories arguing for the dissociation between automatic and controlled components of processing. While behavioral studies provide compelling support for the distinction between automatic and controlled processing in judgment and decision making, less is known about whether these components have a corresponding neural substrate, with some researchers arguing that there is no evidence suggesting a distinct neural basis. This chapter will discuss the behavioral evidence supporting the dissociation between automatic and controlled processing in decision making and review recent literature suggesting potential neural systems that may underlie these processes.",
"title": ""
},
{
"docid": "neg:1840166_15",
"text": "The success of deep learning methodologies draws huge attention to their applications in medical image analysis. One of the applications of deep learning is in the segmentation of retinal vessels and severity classification of diabetic retinopathy (DR) from retinal funduscopic images. This paper studies U-Net model performance in segmenting retinal vessels with different settings of dropout and batch normalization and uses it to investigate the effect of retinal vessels on DR classification. A pre-trained Inception V1 network was used to classify DR severity. Two sets of retinal images, with and without the presence of vessels, were created from the MESSIDOR dataset. The vessel extraction process was done using the best trained U-Net on the DRIVE dataset. The final analysis showed that retinal vessels are a good feature in classifying both severe and early stages of DR.",
"title": ""
},
{
"docid": "neg:1840166_16",
"text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.",
"title": ""
},
{
"docid": "neg:1840166_17",
"text": "The viewpoint consistency constraint requires that the locations of all object features in an image must be consistent with projection from a single viewpoint. The application of this constraint is central to the problem of achieving robust recognition, since it allows the spatial information in an image to be compared with prior knowledge of an object's shape to the full degree of available image resolution. In addition, the constraint greatly reduces the size of the search space during model-based matching by allowing a few initial matches to provide tight constraints for the locations of other model features. Unfortunately, while simple to state, this constraint has seldom been effectively applied in model-based computer vision systems. This paper reviews the history of attempts to make use of the viewpoint consistency constraint and then describes a number of new techniques for applying it to the process of model-based recognition. A method is presented for probabilistically evaluating new potential matches to extend and refine an initial viewpoint estimate. This evaluation allows the model-based verification process to proceed without the expense of backtracking or search. It will be shown that the effective application of the viewpoint consistency constraint, in conjunction with bottom-up image description based upon principles of perceptual organization, can lead to robust three-dimensional object recognition from single gray-scale images.",
"title": ""
},
{
"docid": "neg:1840166_18",
"text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.",
"title": ""
},
{
"docid": "neg:1840166_19",
"text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi- Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.",
"title": ""
}
] |
1840167 | Urdu text classification | [
{
"docid": "pos:1840167_0",
"text": "The paper discusses various phases in Urdu lexicon development from corpus. First the issues related with Urdu orthography such as optional vocalic content, Unicode variations, name recognition, spelling variation etc. have been described, then corpus acquisition, corpus cleaning, tokenization etc has been discussed and finally Urdu lexicon development i.e. POS tags, features, lemmas, phonemic transcription and the format of the lexicon has been discussed .",
"title": ""
},
{
"docid": "pos:1840167_1",
"text": "This paper develops a theoretical learning model of text classification for Support Vector Machines (SVMs). It connects the statistical properties of text-classification tasks with the generalization performance of a SVM in a quantitative way. Unlike conventional approaches to learning text classifiers, which rely primarily on empirical evidence, this model explains why and when SVMs perform well for text classification. In particular, it addresses the following questions: Why can support vector machines handle the large feature spaces in text classification effectively? How is this related to the statistical properties of text? What are sufficient conditions for applying SVMs to text-classification problems successfully?",
"title": ""
}
] | [
{
"docid": "neg:1840167_0",
"text": "Within the philosophy of language, pragmatics has tended to be seen as an adjunct to, and a means of solving problems in, semantics. A cognitive-scientific conception of pragmatics as a mental processing system responsible for interpreting ostensive communicative stimuli (specifically, verbal utterances) has effected a transformation in the pragmatic issues pursued and the kinds of explanation offered. Taking this latter perspective, I compare two distinct proposals on the kinds of processes, and the architecture of the system(s), responsible for the recovery of speaker meaning (both explicitly and implicitly communicated meaning). 1. Pragmatics as a Cognitive System 1.1 From Philosophy of Language to Cognitive Science Broadly speaking, there are two perspectives on pragmatics: the ‘philosophical’ and the ‘cognitive’. From the philosophical perspective, an interest in pragmatics has been largely motivated by problems and issues in semantics. A familiar instance of this was Grice’s concern to maintain a close semantic parallel between logical operators and their natural language counterparts, such as ‘not’, ‘and’, ‘or’, ‘if’, ‘every’, ‘a/some’, and ‘the’, in the face of what look like quite major divergences in the meaning of the linguistic elements (see Grice 1975, 1981). The explanation he provided was pragmatic, i.e. in terms of what occurs when the logical semantics of these terms is put to rational communicative use. Consider the case of ‘and’: (1) a. Mary went to a movie and Sam read a novel. b. She gave him her key and he opened the door. c. She insulted him and he left the room. While (a) seems to reflect the straightforward truth-functional symmetrical connection, (b) and (c) communicate a stronger asymmetric relation: temporal Many thanks to Richard Breheny, Sam Guttenplan, Corinne Iten, Deirdre Wilson and Vladimir Zegarac for helpful comments and support during the writing of this paper. 
Address for correspondence: Department of Phonetics & Linguistics, University College London, Gower Street, London WC1E 6BT, UK. Email: robyn linguistics.ucl.ac.uk Mind & Language, Vol. 17 Nos 1 and 2 February/April 2002, pp. 127–148. Blackwell Publishers Ltd. 2002, 108 Cowley Road, Oxford, OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.",
"title": ""
},
{
"docid": "neg:1840167_1",
"text": "Advanced driver assistance systems are the newest addition to vehicular technology. Such systems use a wide array of sensors to provide a superior driving experience. Vehicle safety and driver alert are important parts of these system. This paper proposes a driver alert system to prevent and mitigate adjacent vehicle collisions by proving warning information of on-road vehicles and possible collisions. A dynamic Bayesian network (DBN) is utilized to fuse multiple sensors to provide driver awareness. It detects oncoming adjacent vehicles and gathers ego vehicle motion characteristics using an on-board camera and inertial measurement unit (IMU). A histogram of oriented gradient feature based classifier is used to detect any adjacent vehicles. Vehicles front-rear end and side faces were considered in training the classifier. Ego vehicles heading, speed and acceleration are captured from the IMU and feed into the DBN. The network parameters were learned from data via expectation maximization(EM) algorithm. The DBN is designed to provide two type of warning to the driver, a cautionary warning and a brake alert for possible collision with other vehicles. Experiments were completed on multiple public databases, demonstrating successful warnings and brake alerts in most situations.",
"title": ""
},
{
"docid": "neg:1840167_2",
"text": "Terrestrial Gamma-ray Flashes (TGFs), discovered in 1994 by the Compton Gamma-Ray Observatory, are high-energy photon bursts originating in the Earth’s atmosphere in association with thunderstorms. In this paper, we demonstrate theoretically that, while TGFs pass through the atmosphere, the large quantities of energetic electrons knocked out by collisions between photons and air molecules generate excited species of neutral and ionized molecules, leading to a significant amount of optical emissions. These emissions represent a novel type of transient luminous events in the vicinity of the cloud tops. We show that this predicted phenomenon illuminates a region with a size notably larger than the TGF source and has detectable levels of brightness. Since the spectroscopic, morphological, and temporal features of this luminous event are closely related with TGFs, corresponding measurements would provide a novel perspective for investigation of TGFs, as well as lightning discharges that produce them.",
"title": ""
},
{
"docid": "neg:1840167_3",
"text": "The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning. We present a general-purpose framework for distributed computing environments, CoCoA, that has an efficient communication scheme and is applicable to a wide variety of problems in machine learning and signal processing. We extend the framework to cover general non-strongly-convex regularizers, including L1-regularized problems like lasso, sparse logistic regression, and elastic net regularization, and show how earlier work can be derived as a special case. We provide convergence guarantees for the class of convex regularized loss minimization objectives, leveraging a novel approach in handling non-strongly-convex regularizers and non-smooth loss functions. The resulting framework has markedly improved performance over state-of-the-art methods, as we illustrate with an extensive set of experiments on real distributed datasets.",
"title": ""
},
{
"docid": "neg:1840167_4",
"text": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment.",
"title": ""
},
{
"docid": "neg:1840167_5",
"text": "Creating rich representations of environments requires integration of multiple sensing modalities with complementary characteristics such as range and imaging sensors. To precisely combine multisensory information, the rigid transformation between different sensor coordinate systems (i.e., extrinsic parameters) must be estimated. The majority of existing extrinsic calibration techniques require one or multiple planar calibration patterns (such as checkerboards) to be observed simultaneously from the range and imaging sensors. The main limitation of these approaches is that they require modifying the scene with artificial targets. In this paper, we present a novel algorithm for extrinsically calibrating a range sensor with respect to an image sensor with no requirement of external artificial targets. The proposed method exploits natural linear features in the scene to precisely determine the rigid transformation between the coordinate frames. First, a set of 3D lines (plane intersection and boundary line segments) are extracted from the point cloud, and a set of 2D line segments are extracted from the image. Correspondences between the 3D and 2D line segments are used as inputs to an optimization problem which requires jointly estimating the relative translation and rotation between the coordinate frames. The proposed method is not limited to any particular types or configurations of sensors. To demonstrate robustness, efficiency and generality of the presented algorithm, we include results using various sensor configurations.",
"title": ""
},
{
"docid": "neg:1840167_6",
"text": "This paper presents a model for managing departure aircraft at the spot or gate on the airport surface. The model is applied over two time frames: long term (one hour in future) for collaborative decision making, and short term (immediate) for decisions regarding the release of aircraft. The purpose of the model is to provide the controller a schedule of spot or gate release times optimized for runway utilization. This model was tested in nominal and heavy surface traffic scenarios in a simulated environment, and results indicate average throughput improvement of 10% in high traffic scenarios even with up to two minutes of uncertainty in spot arrival times.",
"title": ""
},
{
"docid": "neg:1840167_7",
"text": "An approach for estimating direction-of-arrival (DoA) based on power output cross-correlation and antenna pattern diversity is proposed for a reactively steerable antenna. An \"estimator condition\" is proposed, from which the most appropriate pattern shape is derived. Computer simulations with directive beam patterns obtained from an electronically steerable parasitic array radiator antenna model are conducted to illustrate the theory and to inspect the method performance with respect to the \"estimator condition\". The simulation results confirm that a good estimation can be expected when suitable directive patterns are chosen. In addition, to verify performance, experiments on estimating DoA are conducted in an anechoic chamber for several angles of arrival and different scenarios of antenna adjustable reactance values. The results show that the proposed method can provide high-precision DoA estimation.",
"title": ""
},
{
"docid": "neg:1840167_8",
"text": "Landscape genetics has seen rapid growth in number of publications since the term was coined in 2003. An extensive literature search from 1998 to 2008 using keywords associated with landscape genetics yielded 655 articles encompassing a vast array of study organisms, study designs and methodology. These publications were screened to identify 174 studies that explicitly incorporated at least one landscape variable with genetic data. We systematically reviewed this set of papers to assess taxonomic and temporal trends in: (i) geographic regions studied; (ii) types of questions addressed; (iii) molecular markers used; (iv) statistical analyses used; and (v) types and nature of spatial data used. Overall, studies have occurred in geographic regions proximal to developed countries and more commonly in terrestrial vs. aquatic habitats. Questions most often focused on effects of barriers and/or landscape variables on gene flow. The most commonly used molecular markers were microsatellites and amplified fragment length polymorphism (AFLPs), with AFLPs used more frequently in plants than animals. Analysis methods were dominated by Mantel and assignment tests. We also assessed differences among journals to evaluate the uniformity of reporting and publication standards. Few studies presented an explicit study design or explicit descriptions of spatial extent. While some landscape variables such as topographic relief affected most species studied, effects were not universal, and some species appeared unaffected by the landscape. Effects of habitat fragmentation were mixed, with some species altering movement paths and others unaffected. Taken together, although some generalities emerged regarding effects of specific landscape variables, results varied, thereby reinforcing the need for species-specific work. 
We conclude by: highlighting gaps in knowledge and methodology, providing guidelines to authors and reviewers of landscape genetics studies, and suggesting promising future directions of inquiry.",
"title": ""
},
{
"docid": "neg:1840167_9",
"text": "This paper is the first attempt to learn the policy of an inquiry dialog system (IDS) by using deep reinforcement learning (DRL). Most IDS frameworks represent dialog states and dialog acts with logical formulae. In order to make learning inquiry dialog policies more effective, we introduce a logical formula embedding framework based on a recursive neural network. The results of experiments to evaluate the effect of 1) the DRL and 2) the logical formula embedding framework show that the combination of the two are as effective or even better than existing rule-based methods for inquiry dialog policies.",
"title": ""
},
{
"docid": "neg:1840167_10",
"text": "Systemic risk is a key concern for central banks charged with safeguarding overall financial stability. In this paper we investigate how systemic risk is affected by the structure of the financial system. We construct banking systems that are composed of a number of banks that are connected by interbank linkages. We then vary the key parameters that define the structure of the financial system — including its level of capitalisation, the degree to which banks are connected, the size of interbank exposures and the degree of concentration of the system — and analyse the influence of these parameters on the likelihood of contagious (knock-on) defaults. First, we find that the better capitalised banks are, the more resilient is the banking system against contagious defaults and this effect is non-linear. Second, the effect of the degree of connectivity is non-monotonic, that is, initially a small increase in connectivity increases the contagion effect; but after a certain threshold value, connectivity improves the ability of a banking system to absorb shocks. Third, the size of interbank liabilities tends to increase the risk of knock-on default, even if banks hold capital against such exposures. Fourth, more concentrated banking systems are shown to be prone to larger systemic risk, all else equal. In an extension to the main analysis we study how liquidity effects interact with banking structure to produce a greater chance of systemic breakdown. We finally consider how the risk of contagion might depend on the degree of asymmetry (tiering) inherent in the structure of the banking system. A number of our results have important implications for public policy, which this paper also draws out.",
"title": ""
},
{
"docid": "neg:1840167_11",
"text": "The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840167_12",
"text": "This paper presents a new strategy for the active disturbance rejection control (ADRC) of a general uncertain system with unknown bounded disturbance based on a nonlinear sliding mode extended state observer (SMESO). Firstly, a nonlinear extended state observer is synthesized using sliding mode technique for a general uncertain system assuming asymptotic stability. Then the convergence characteristics of the estimation error are analyzed by Lyapunov strategy. It revealed that the proposed SMESO is asymptotically stable and accurately estimates the states of the system in addition to estimating the total disturbance. Then, an ADRC is implemented by using a nonlinear state error feedback (NLSEF) controller; that is suggested by J. Han and the proposed SMESO to control and actively reject the total disturbance of a permanent magnet DC (PMDC) motor. These disturbances caused by the unknown exogenous disturbances and the matched uncertainties of the controlled model. The proposed SMESO is compared with the linear extended state observer (LESO). Through digital simulations using MATLAB / SIMULINK, the chattering phenomenon has been reduced dramatically on the control input channel compared to LESO. Finally, the closed-loop system exhibits a high immunity to torque disturbance and quite robustness to matched uncertainties in the system. Keywords—extended state observer; sliding mode; rejection control; tracking differentiator; DC motor; nonlinear state feedback",
"title": ""
},
{
"docid": "neg:1840167_13",
"text": "Data mining (also known as knowledge discovery from databases) is the process of extraction of hidden, previously unknown and potentially useful information from databases. The outcome of the extracted data can be analyzed for the future planning and development perspectives. In this paper, we have made an attempt to demonstrate how one can extract the local (district) level census, socio-economic and population related other data for knowledge discovery and their analysis using the powerful data mining tool Weka. I. DATA MINING Data mining has been defined as the nontrivial extraction of implicit, previously unknown, and potentially useful information from databases/data warehouses. It uses machine learning, statistical and visualization techniques to discover and present knowledge in a form, which is easily comprehensive to humans [1]. Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help user focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. The automated, prospective analyses offered by data mining move beyond the analyses of past events provided by retrospective tools typical of decision support systems. Data mining tools can answer business questions that traditionally were too time consuming to resolve. They scour databases for hidden patterns, finding predictive information that experts may miss because it lies outside their expectations. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line [2]. Data mining steps in the knowledge discovery process are as follows: 1. Data cleaningThe removal of noise and inconsistent data. 2. 
Data integration - The combination of multiple sources of data. 3. Data selection - The data relevant for analysis is retrieved from the database. 4. Data transformation - The consolidation and transformation of data into forms appropriate for mining. 5. Data mining - The use of intelligent methods to extract patterns from data. 6. Pattern evaluation - Identification of patterns that are interesting. (ICETSTM – 2013) International Conference in “Emerging Trends in Science, Technology and Management-2013, Singapore Census Data Mining and Data Analysis using WEKA 36 7. Knowledge presentation - Visualization and knowledge representation techniques are used to present the extracted or mined knowledge to the end user [3]. The actual data mining task is the automatic or semi-automatic analysis of large quantities of data to extract previously unknown interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). This usually involves using database techniques such as spatial indices. These patterns can then be seen as a kind of summary of the input data, and may be used in further analysis or, for example, in machine learning and predictive analytics. For example, the data mining step might identify multiple groups in the data, which can then be used to obtain more accurate prediction results by a decision support system. Neither the data collection, data preparation, nor result interpretation and reporting are part of the data mining step, but do belong to the overall KDD process as additional steps [7][8]. II. WEKA: Weka (Waikato Environment for Knowledge Analysis) is a popular suite of machine learning software written in Java, developed at the University of Waikato, New Zealand. Weka is free software available under the GNU General Public License.
The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modeling, together with graphical user interfaces for easy access to this functionality [4]. Weka is a collection of machine learning algorithms for solving real-world data mining problems. It is written in Java and runs on almost any platform. The algorithms can either be applied directly to a dataset or called from your own Java code [5]. The original non-Java version of Weka was a TCL/TK front-end to (mostly third-party) modeling algorithms implemented in other programming languages, plus data preprocessing utilities in C, and a Makefile-based system for running machine learning experiments. This original version was primarily designed as a tool for analyzing data from agricultural domains, but the more recent fully Java-based version (Weka 3), for which development started in 1997, is now used in many different application areas, in particular for educational purposes and research. Advantages of Weka include: I. Free availability under the GNU General Public License; II. Portability, since it is fully implemented in the Java programming language and thus runs on almost any modern computing platform; III. A comprehensive collection of data preprocessing and modeling techniques; IV. Ease of use due to its graphical user interfaces. Weka supports several standard data mining tasks, more specifically, data preprocessing, clustering, classification, regression, visualization, and feature selection [10]. All of Weka's techniques are predicated on the assumption that the data is available as a single flat file or relation, where each data point is described by a fixed number of attributes (normally, numeric or nominal attributes, but some other attribute types are also supported). Weka provides access to SQL databases using Java Database Connectivity and can process the result returned by a database query.
It is not capable of multi-relational data mining, but there is separate software for converting a collection of linked database tables into a single table that is suitable for processing using Weka. Another important area that is currently not covered by the algorithms included in the Weka distribution is sequence modeling [4]. III. DATA PROCESSING, METHODOLOGY AND RESULTS The primary available data such as the census (2001), socio-economic data, and some basic information about Latur district are collected from the National Informatics Centre (NIC), Latur, which is mainly required to design and develop the database for Latur district of Maharashtra state of India. The database is designed in the MS-Access 2003 database management system to store the collected data. The data is formed according to the required format and structures. Further, the data is converted to ARFF (Attribute Relation File Format) format to process in WEKA. An ARFF file is an ASCII text file that describes a list of instances sharing a set of attributes. ARFF files were developed by the Machine Learning Project at the Department of Computer Science of The University of Waikato for use with the Weka machine learning software. This document describes the version of ARFF used with Weka versions 3.2 to 3.3; this is an extension of the ARFF format as described in the data mining book written by Ian H. Witten and Eibe Frank [6][9]. After processing the ARFF file in WEKA, the list of all attributes, statistics and other parameters can be utilized as shown in Figure 1. Fig. 1 Processed ARFF file in WEKA. In the file shown above, data of 729 villages is processed with 25 different attributes like population, health, literacy, village locations etc.
Among these, a few are preprocessed attributes generated from the census data, like percent_male_literacy, total_percent_literacy, total_percent_illiteracy, sex_ratio etc. The processed data in Weka can be analyzed using different data mining techniques like Classification, Clustering, Association rule mining, and Visualization algorithms. Figure 2 shows a few of the processed attributes visualized in a 2-dimensional graphical representation. Fig. 2 Graphical visualization of processed attributes. Information can be extracted with respect to the associative relations of two or more attributes in the data set. In this process, we have made an attempt to visualize the impact of male and female literacy on gender inequality. The literacy-related and population data were processed to compute the percentage-wise male and female literacy. Accordingly, we have computed the sex ratio attribute from the given male and female population data. The new attributes, like male_percent_literacy, female_percent_literacy and sex_ratio, are compared with each other to extract the impact of literacy on gender inequality. Figures 3 and 4 show the extracted results of sex ratio values with male and female literacy. Fig. 3 Female literacy and Sex ratio values. Fig. 4 Male literacy and Sex ratio values. On the Y-axis, the female percent literacy values are shown in Figure 3, and the male percent literacy values are shown in Figure 4. By considering both results, the female percent literacy is poorer than the male percent literacy in the district. The sex ratio values are higher in male percent literacy than in female percent literacy.
The results clearly show that literacy is very important in managing the gender inequality of any region. ACKNOWLEDGEMENT: The authors are grateful to the department of NIC, Latur for providing all the basic data and to WEKA for providing such a strong tool to extract and analyze knowledge from databases. CONCLUSION Knowledge extraction from database is becom",
"title": ""
},
{
"docid": "neg:1840167_14",
"text": "In this brief, we propose a variable structure based nonlinear missile guidance/autopilot system with highly maneuverable actuators, mainly consisting of thrust vector control and divert control system, for the task of intercepting of a theater ballistic missile. The aim of the present work is to achieve bounded target interception under the mentioned 5 degree-of-freedom (DOF) control such that the distance between the missile and the target will enter the range of triggering the missile's explosion. First, a 3-DOF sliding-mode guidance law of the missile considering external disturbances and zero-effort-miss (ZEM) is designed to minimize the distance between the center of the missile and that of the target. Next, a quaternion-based sliding-mode attitude controller is developed to track the attitude command while coping with variation of missile's inertia and uncertain aerodynamic force/wind gusts. The stability of the overall system and ZEM-phase convergence are analyzed thoroughly via Lyapunov stability theory. Extensive simulation results are obtained to validate the effectiveness of the proposed integrated guidance/autopilot system by use of the 5-DOF inputs.",
"title": ""
},
{
"docid": "neg:1840167_15",
"text": "Dear Editor: We read carefully and with great interest the anatomic study performed by Lilyquist et al. They performed an interesting study of the tibiofibular syndesmosis using a 3-dimensional method that can be of help when performing anatomic studies. As the authors report in the study, a controversy exists regarding the anatomic structures of the syndesmosis, and a huge confusion can be observed when reading the related literature. However, anatomic confusion between the inferior transverse ligament and the intermalleolar ligament is present in the manuscript: the intermalleolar ligament is erroneously identified as the “inferior” transverse ligament. The transverse ligament is the name that receives the deep component of the posterior tibiofibular ligament. The posterior tibiofibular ligament is a ligament located in the posterior aspect of the ankle that joins the distal epiphysis of tibia and fibula; it is formed by 2 fascicles, one superficial and one deep. The deep fascicle or transverse ligament is difficult to see from a posterior ankle view, but easily from a plantar view of the tibiofibular syndesmosis (Figure 1). Instead, the intermalleolar ligament is a thickening of the posterior ankle joint capsule, located between the posterior talofibular ligament and the transverse ligament. It originates from the medial facet of the lateral malleolus and directs medially to tibia and talus (Figure 2). The intermalleolar ligament was observed in 100% of the specimens by Golanó et al in contrast with 70% in Lilyquist’s study. On the other hand, structures of the ankle syndesmosis have not been named according to the International Anatomical Terminology (IAT). In 1955, the VI Federative International Congress of Anatomy accorded to eliminate eponyms from the IAT. Because of this measure, the Chaput, Wagstaff, or Volkman tubercles used in the manuscript should be eliminated in order to avoid increasing confusion. 
Lilyquist et al also defined the tibiofibular syndesmosis as being formed by the anterior inferior tibiofibular ligament, the posterior inferior tibiofibular ligament, the interosseous ligament, and the inferior transverse ligament. The anterior inferior tibiofibular ligament and posterior inferior tibiofibular ligament of the tibiofibular syndesmosis (or inferior tibiofibular joint) should be referred to as the anterior tibiofibular ligament and posterior tibiofibular ligament. The reason why it is not necessary to use “inferior” in its description is that the ligaments of the superior tibiofibular joint are the anterior ligament of the fibular head and the posterior ligament of the fibular head, not the “anterior superior tibiofibular ligament” and “posterior superior tibiofibular ligament.” The ankle syndesmosis is one of the areas of the human body where chronic anatomic errors exist: the transverse ligament (deep component of the posterior tibiofibular ligament), the anterior tibiofibular ligament (“anterior",
"title": ""
},
{
"docid": "neg:1840167_16",
"text": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.",
"title": ""
},
{
"docid": "neg:1840167_17",
"text": "OBJECTIVE\nTo determine the formation and dissolution of calcium fluoride on the enamel surface after application of two fluoride gel-saliva mixtures.\n\n\nMETHOD AND MATERIALS\nFrom each of 80 bovine incisors, two enamel specimens were prepared and subjected to two different treatment procedures. In group 1, 80 specimens were treated with a mixture of an amine fluoride gel (1.25% F-; pH 5.2; 5 minutes) and human saliva. In group 2, 80 enamel blocks were subjected to a mixture of sodium fluoride gel (1.25% F; pH 5.5; 5 minutes) and human saliva. Subsequent to fluoride treatment, 40 specimens from each group were stored in human saliva and sterile water, respectively. Ten specimens were removed after each of 1 hour, 24 hours, 2 days, and 5 days and analyzed according to potassium hydroxide-soluble fluoride.\n\n\nRESULTS\nApplication of amine fluoride gel resulted in a higher amount of potassium hydroxide-soluble fluoride than did sodium fluoride gel 1 hour after application. Saliva exerted an inhibitory effect according to the dissolution rate of calcium fluoride. However, after 5 days, more than 90% of the precipitated calcium fluoride was dissolved in the amine fluoride group, and almost all potassium hydroxide-soluble fluoride was lost in the sodium fluoride group. Calcium fluoride apparently dissolves rapidly, even at almost neutral pH.\n\n\nCONCLUSION\nConsidering the limitations of an in vitro study, it is concluded that highly concentrated fluoride gels should be applied at an adequate frequency to reestablish a calcium fluoride-like layer.",
"title": ""
}
] |
1840168 | A Supervised Patch-Based Approach for Human Brain Labeling | [
{
"docid": "pos:1840168_0",
"text": "Regions in three-dimensional magnetic resonance (MR) brain images can be classified using protocols for manually segmenting and labeling structures. For large cohorts, time and expertise requirements make this approach impractical. To achieve automation, an individual segmentation can be propagated to another individual using an anatomical correspondence estimate relating the atlas image to the target image. The accuracy of the resulting target labeling has been limited but can potentially be improved by combining multiple segmentations using decision fusion. We studied segmentation propagation and decision fusion on 30 normal brain MR images, which had been manually segmented into 67 structures. Correspondence estimates were established by nonrigid registration using free-form deformations. Both direct label propagation and an indirect approach were tested. Individual propagations showed an average similarity index (SI) of 0.754+/-0.016 against manual segmentations. Decision fusion using 29 input segmentations increased SI to 0.836+/-0.009. For indirect propagation of a single source via 27 intermediate images, SI was 0.779+/-0.013. We also studied the effect of the decision fusion procedure using a numerical simulation with synthetic input data. The results helped to formulate a model that predicts the quality improvement of fused brain segmentations based on the number of individual propagated segmentations combined. We demonstrate a practicable procedure that exceeds the accuracy of previous automatic methods and can compete with manual delineations.",
"title": ""
},
{
"docid": "pos:1840168_1",
"text": "We present a technique for automatically assigning a neuroanatomical label to each voxel in an MRI volume based on probabilistic information automatically estimated from a manually labeled training set. In contrast to existing segmentation procedures that only label a small number of tissue classes, the current method assigns one of 37 labels to each voxel, including left and right caudate, putamen, pallidum, thalamus, lateral ventricles, hippocampus, and amygdala. The classification technique employs a registration procedure that is robust to anatomical variability, including the ventricular enlargement typically associated with neurological diseases and aging. The technique is shown to be comparable in accuracy to manual labeling, and of sufficient sensitivity to robustly detect changes in the volume of noncortical structures that presage the onset of probable Alzheimer's disease.",
"title": ""
}
] | [
{
"docid": "neg:1840168_0",
"text": "ARTICLE The healing properties of compassion have been written about for centuries. The Dalai Lama often stresses that if you want others to be happy – focus on compassion; if you want to be happy yourself – focus on compassion (Dalai Lama 1995, 2001). Although all clinicians agree that compassion is central to the doctor–patient and therapist–client relationship, recently the components of compassion have been looked at through the lens of Western psychological science and research 2003a,b). Compassion can be thought of as a skill that one can train in, with increasing evidence that focusing on and practising compassion can influence neurophysiological and immune systems (Davidson 2003; Lutz 2008). Compassion-focused therapy refers to the underpinning theory and process of applying a compassion model to psychotherapy. Compassionate mind training refers to specific activities designed to develop compassionate attributes and skills, particularly those that influence affect regulation. Compassion-focused therapy adopts the philosophy that our understanding of psychological and neurophysiological processes is developing at such a rapid pace that we are now moving beyond 'schools of psychotherapy' towards a more integrated, biopsychosocial science of psychotherapy (Gilbert 2009). Compassion-focused therapy and compassionate mind training arose from a number of observations. First, people with high levels of shame and self-criticism can have enormous difficulty in being kind to themselves, feeling self-warmth or being self-compassionate. 
Second, it has long been known that problems of shame and self-criticism are often rooted in histories of abuse, bullying, high expressed emotion in the family, neglect and/or lack of affection. Individuals subjected to early experiences of this type can become highly sensitive to threats of rejection or criticism from the outside world and can quickly become self-attacking: they experience both their external and internal worlds as easily turning hostile. Third, it has been recognised that working with shame and self-criticism requires a therapeutic focus on memories of such early experiences. And fourth, there are clients who engage with the cognitive and behavioural tasks of a therapy, and become skilled at generating (say) alternatives for their negative thoughts and beliefs, but who still do poorly in therapy (Rector 2000). They are likely to say, 'I understand the logic of my alternative thinking but it doesn't really help me feel much better' or 'I know I'm not to blame for the abuse but I still feel that I …",
"title": ""
},
{
"docid": "neg:1840168_1",
"text": "We introduce a framework for feature selection based on dependence maximization between the selected features and the labels of an estimation problem, using the Hilbert-Schmidt Independence Criterion. The key idea is that good features should be highly dependent on the labels. Our approach leads to a greedy procedure for feature selection. We show that a number of existing feature selectors are special cases of this framework. Experiments on both artificial and real-world data show that our feature selector works well in practice.",
"title": ""
},
{
"docid": "neg:1840168_2",
"text": "We consider practical methods for adding constraints to the K-Means clustering algorithm in order to avoid local solutions with empty clusters or clusters having very few points. We often observe this phenomena when applying K-Means to datasets where the number of dimensions is n ≥ 10 and the number of desired clusters is k ≥ 20. We propose explicitly adding k constraints to the underlying clustering optimization problem requiring that each cluster have at least a minimum number of points in it. We then investigate the resulting cluster assignment step. Preliminary numerical tests on real datasets indicate the constrained approach is less prone to poor local solutions, producing a better summary of the underlying data.",
"title": ""
},
{
"docid": "neg:1840168_3",
"text": "Networks represent relationships between entities in many complex systems, spanning from online social interactions to biological cell development and brain connectivity. In many cases, relationships between entities are unambiguously known: are two users “friends” in a social network? Do two researchers collaborate on a published article? Do two road segments in a transportation system intersect? These are directly observable in the system in question. In most cases, relationships between nodes are not directly observable and must be inferred: Does one gene regulate the expression of another? Do two animals who physically co-locate have a social bond? Who infected whom in a disease outbreak in a population?\n Existing approaches for inferring networks from data are found across many application domains and use specialized knowledge to infer and measure the quality of inferred network for a specific task or hypothesis. However, current research lacks a rigorous methodology that employs standard statistical validation on inferred models. In this survey, we examine (1) how network representations are constructed from underlying data, (2) the variety of questions and tasks on these representations over several domains, and (3) validation strategies for measuring the inferred network’s capability of answering questions on the system of interest.",
"title": ""
},
{
"docid": "neg:1840168_4",
"text": "The mid 20 century witnessed some serious attempts in studies of play and games with an emphasis on their importance within culture. Most prominently, Johan Huizinga (1944) maintained in his book Homo Ludens that the earliest stage of culture is in the form of play and that culture proceeds in the shape and the mood of play. He also claimed that some elements of play crystallised as knowledge such as folklore, poetry and philosophy as culture advanced.",
"title": ""
},
{
"docid": "neg:1840168_5",
"text": "The Internet of things(IoT) has brought the vision of the smarter world into reality and including healthcare it has a many application domains. The convergence of IOT-cloud can play a significant role in the smart healthcare by offering better insight of healthcare content to support affordable and quality patient care. In this paper, we proposed a model that allows the sensor to monitor the patient's symptom. The collected monitored data transmitted to the gateway via Bluetooth and then to the cloud server through docker container using the internet. Thus enabling the physician to diagnose and monitor health problems wherever the patient is. Also, we address the several challenges related to health monitoring and management using IoT.",
"title": ""
},
{
"docid": "neg:1840168_6",
"text": "Database consolidation is gaining wide acceptance as a means to reduce the cost and complexity of managing database systems. However, this new trend poses many interesting challenges for understanding and predicting system performance. The consolidated databases in multi-tenant settings share resources and compete with each other for these resources. In this work we present an experimental study to highlight how these interactions can be fairly complex. We argue that individual database staging or workload profiling is not an adequate approach to understanding the performance of the consolidated system. Our initial investigations suggest that machine learning approaches that use monitored data to model the system can work well for important tasks.",
"title": ""
},
{
"docid": "neg:1840168_7",
"text": "One glaring weakness of Java for numerical programming is its lack of support for complex numbers. Simply creating a Complex number class leads to poor performance relative to Fortran. We show in this paper, however, that the combination of such aComplex class and a compiler that understands its semantics does indeed lead to Fortran-like performance. This performance gain is achieved while leaving the Java language completely unchanged and maintaining full compatibility with existing Java Virtual Machines . We quantify the effectiveness of our approach through experiments with linear algebra, electromagnetics, and computational fluid-dynamics kernels.",
"title": ""
},
{
"docid": "neg:1840168_8",
"text": "The lack of reliability of gliding contacts in highly constrained environments induces manufacturers to develop contactless transmission power systems such as rotary transformers. The following paper proposes an optimal design methodology for rotary transformers supplied from a low-voltage source at high temperatures. The method is based on an accurate multidisciplinary analysis model divided into magnetic, thermal and electrical parts, optimized thanks to a sequential quadratic programming method. The technique is used to discuss the design particularities of rotary transformers. Two optimally designed structures of rotary transformers : an iron silicon coaxial one and a ferrite pot core one, are compared.",
"title": ""
},
{
"docid": "neg:1840168_9",
"text": "Hoon Sohn Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C926 Los Alamos National Laboratory, Los Alamos, NM 87545 e-mail: sohn@lanl.gov Charles R. Farrar Engineering Sciences & Applications Division, Engineering Analysis Group, M/S C946 e-mail: farrar@lanl.gov Norman F. Hunter Engineering Sciences & Applications Division, Measurement Technology Group, M/S C931 e-mail: hunter@lanl.gov Keith Worden Department of Mechanical Engineering University of Sheffield Mappin St. Sheffield S1 3JD, United Kingdom e-mail: k.worden@sheffield.ac.uk",
"title": ""
},
{
"docid": "neg:1840168_10",
"text": "Street View serves millions of Google users daily with panoramic imagery captured in hundreds of cities in 20 countries across four continents. A team of Google researchers describes the technical challenges involved in capturing, processing, and serving street-level imagery on a global scale.",
"title": ""
},
{
"docid": "neg:1840168_11",
"text": "5.1. Detection Formats 475 5.2. Food Quality and Safety Analysis 477 5.2.1. Pathogens 477 5.2.2. Toxins 479 5.2.3. Veterinary Drugs 479 5.2.4. Vitamins 480 5.2.5. Hormones 480 5.2.6. Diagnostic Antibodies 480 5.2.7. Allergens 481 5.2.8. Proteins 481 5.2.9. Chemical Contaminants 481 5.3. Medical Diagnostics 481 5.3.1. Cancer Markers 481 5.3.2. Antibodies against Viral Pathogens 482 5.3.3. Drugs and Drug-Induced Antibodies 483 5.3.4. Hormones 483 5.3.5. Allergy Markers 483 5.3.6. Heart Attack Markers 484 5.3.7. Other Molecular Biomarkers 484 5.4. Environmental Monitoring 484 5.4.1. Pesticides 484 5.4.2. 2,4,6-Trinitrotoluene (TNT) 485 5.4.3. Aromatic Hydrocarbons 485 5.4.4. Heavy Metals 485 5.4.5. Phenols 485 5.4.6. Polychlorinated Biphenyls 487 5.4.7. Dioxins 487 5.5. Summary 488 6. Conclusions 489 7. Abbreviations 489 8. Acknowledgment 489 9. References 489",
"title": ""
},
{
"docid": "neg:1840168_12",
"text": "Today's smartphones are equipped with precise motion sensors like accelerometer and gyroscope, which can measure tiny motion and rotation of devices. While they make mobile applications more functional, they also bring risks of leaking users' privacy. Researchers have found that tap locations on screen can be roughly inferred from motion data of the device. They mostly utilized this side-channel for inferring short input like PIN numbers and passwords, with repeated attempts to boost accuracy. In this work, we study further for longer input inference, such as chat record and e-mail content, anything a user ever typed on a soft keyboard. Since people increasingly rely on smartphones for daily activities, their inputs directly or indirectly expose privacy about them. Thus, it is a serious threat if their input text is leaked.\n To make our attack practical, we utilize the shared memory side-channel for detecting window events and tap events of a soft keyboard. The up or down state of the keyboard helps triggering our Trojan service for collecting accelerometer and gyroscope data. Machine learning algorithms are used to roughly predict the input text from the raw data and language models are used to further correct the wrong predictions. We performed experiments on two real-life scenarios, which were writing emails and posting Twitter messages, both through mobile clients. Based on the experiments, we show the feasibility of inferring long user inputs to readable sentences from motion sensor data. By applying text mining technology on the inferred text, more sensitive information about the device owners can be exposed.",
"title": ""
},
{
"docid": "neg:1840168_13",
"text": "This paper presents a new class of dual-, tri- and quad-band BPF by using proposed open stub-loaded shorted stepped-impedance resonator (OSLSSIR). The OSLSSIR consists of a two-end-shorted three-section stepped-impedance resistor (SIR) with two identical open stubs loaded at its impedance junctions. Two 50- Ω tapped lines are directly connected to two shorted sections of the SIR to serve as I/O ports. As the electrical lengths of two identical open stubs increase, many more transmission poles (TPs) and transmission zeros (TZs) can be shifted or excited within the interested frequency range. The TZs introduced by open stubs divide the TPs into multiple groups, which can be applied to design a multiple-band bandpass filter (BPF). In order to increase many more design freedoms for tuning filter performance, a high-impedance open stub and the narrow/broad side coupling are introduced as perturbations in all filters design, which can tune the even- and odd-mode TPs separately. In addition, two branches of I/O coupling and open stub-loaded shorted microstrip line are employed in tri- and quad-band BPF design. As examples, two dual-wideband BPFs, one tri-band BPF, and one quad-band BPF have been successfully developed. The fabricated four BPFs have merits of compact sizes, low insertion losses, and high band-to-band isolations. The measured results are in good agreement with the full-wave simulated results.",
"title": ""
},
{
"docid": "neg:1840168_14",
"text": "The goal of this paper is the automatic identification of characters in TV and feature film material. In contrast to standard approaches to this task, which rely on the weak supervision afforded by transcripts and subtitles, we propose a new method requiring only a cast list. This list is used to obtain images of actors from freely available sources on the web, providing a form of partial supervision for this task. In using images of actors to recognize characters, we make the following three contributions: (i) We demonstrate that an automated semi-supervised learning approach is able to adapt from the actor’s face to the character’s face, including the face context of the hair; (ii) By building voice models for every character, we provide a bridge between frontal faces (for which there is plenty of actor-level supervision) and profile (for which there is very little or none); and (iii) by combining face context and speaker identification, we are able to identify characters with partially occluded faces and extreme facial poses. Results are presented on the TV series ‘Sherlock’ and the feature film ‘Casablanca’. We achieve the state-of-the-art on the Casablanca benchmark, surpassing previous methods that have used the stronger supervision available from transcripts.",
"title": ""
},
{
"docid": "neg:1840168_15",
"text": "The implementation of effective strategies to manage leaks represents an essential goal for all utilities involved with drinking water supply in order to reduce water losses affecting urban distribution networks. This study concerns the early detection of leaks occurring in small-diameter customers’ connections to water supply networks. An experimental campaign was carried out in a test bed to investigate the sensitivity of Acoustic Emission (AE) monitoring to water leaks. Damages were artificially induced on a polyethylene pipe (length 28 m, outer diameter 32 mm) at different distances from an AE transducer. Measurements were performed in both unburied and buried pipe conditions. The analysis permitted the identification of a clear correlation between three monitored parameters (namely total Hits, Cumulative Counts and Cumulative Amplitude) and the characteristics of the examined leaks.",
"title": ""
},
{
"docid": "neg:1840168_16",
"text": "With the current set of design tools and methods available to game designers, vast portions of the space of possible games are not currently reachable. In the past, technological advances such as improved graphics and new controllers have driven the creation of new forms of gameplay, but games have still not made great strides into new gameplay experiences. We argue that the development of innovative artificial intelligence (AI) systems plays a crucial role in the exploration of currently unreachable spaces. To aid in exploration, we suggest a practice called AI-based game design, an iterative design process that deeply integrates the affordances of an AI system within the context of game design. We have applied this process in our own projects, and in this paper we present how it has pushed the boundaries of current game genres and experiences, as well as discuss the future AI-based game design.",
"title": ""
},
{
"docid": "neg:1840168_17",
"text": "Following the increasing popularity of mobile ecosystems, cybercriminals have increasingly targeted them, designing and distributing malicious apps that steal information or cause harm to the device’s owner. Aiming to counter them, detection techniques based on either static or dynamic analysis that model Android malware, have been proposed. While the pros and cons of these analysis techniques are known, they are usually compared in the context of their limitations e.g., static analysis is not able to capture runtime behaviors, full code coverage is usually not achieved during dynamic analysis, etc. Whereas, in this paper, we analyze the performance of static and dynamic analysis methods in the detection of Android malware and attempt to compare them in terms of their detection performance, using the same modeling approach. To this end, we build on MAMADROID, a state-of-the-art detection system that relies on static analysis to create a behavioral model from the sequences of abstracted API calls. Then, aiming to apply the same technique in a dynamic analysis setting, we modify CHIMP, a platform recently proposed to crowdsource human inputs for app testing, in order to extract API calls’ sequences from the traces produced while executing the app on a CHIMP virtual device. We call this system AUNTIEDROID and instantiate it by using both automated (Monkey) and user-generated inputs. We find that combining both static and dynamic analysis yields the best performance, with F -measure reaching 0.92. We also show that static analysis is at least as effective as dynamic analysis, depending on how apps are stimulated during execution, and, finally, investigate the reasons for inconsistent misclassifications across methods.",
"title": ""
},
{
"docid": "neg:1840168_18",
"text": "Cellulosic plant material represents an as-of-yet untapped source of fermentable sugars for significant industrial use. Many physio-chemical structural and compositional factors hinder the enzymatic digestibility of cellulose present in lignocellulosic biomass. The goal of any pretreatment technology is to alter or remove structural and compositional impediments to hydrolysis in order to improve the rate of enzyme hydrolysis and increase yields of fermentable sugars from cellulose or hemicellulose. These methods cause physical and/or chemical changes in the plant biomass in order to achieve this result. Experimental investigation of physical changes and chemical reactions that occur during pretreatment is required for the development of effective and mechanistic models that can be used for the rational design of pretreatment processes. Furthermore, pretreatment processing conditions must be tailored to the specific chemical and structural composition of the various, and variable, sources of lignocellulosic biomass. This paper reviews process parameters and their fundamental modes of action for promising pretreatment methods.",
"title": ""
},
{
"docid": "neg:1840168_19",
"text": "The purpose of this paper is to discover a semi-optimal set of trading rules and to investigate its effectiveness as applied to Egyptian Stocks. The aim is to mix different categories of technical trading rules and let an automatic evolution process decide which rules are to be used for particular time series. This difficult task can be achieved by using genetic algorithms (GA's), they permit the creation of artificial experts taking their decisions from an optimal subset of the a given set of trading rules. The GA's based on the survival of the fittest, do not guarantee a global optimum but they are known to constitute an effective approach in optimizing non-linear functions. Selected liquid stocks are tested and GA trading rules were compared with other conventional and well known technical analysis rules. The Proposed GA system showed clear better average profit and in the same high sharpe ratio, which indicates not only good profitability but also better risk-reward trade-off",
"title": ""
}
] |
1840169 | Towards View-point Invariant Person Re-identification via Fusion of Anthropometric and Gait Features from Kinect Measurements | [
{
"docid": "pos:1840169_0",
"text": "Human gait is an important indicator of health, with applications ranging from diagnosis, monitoring, and rehabilitation. In practice, the use of gait analysis has been limited. Existing gait analysis systems are either expensive, intrusive, or require well-controlled environments such as a clinic or a laboratory. We present an accurate gait analysis system that is economical and non-intrusive. Our system is based on the Kinect sensor and thus can extract comprehensive gait information from all parts of the body. Beyond standard stride information, we also measure arm kinematics, demonstrating the wide range of parameters that can be extracted. We further improve over existing work by using information from the entire body to more accurately measure stride intervals. Our system requires no markers or battery-powered sensors, and instead relies on a single, inexpensive commodity 3D sensor with a large preexisting install base. We suggest that the proposed technique can be used for continuous gait tracking at home.",
"title": ""
},
{
"docid": "pos:1840169_1",
"text": "Recent advances in visual tracking methods allow following a given object or individual in presence of significant clutter or partial occl usions in a single or a set of overlapping camera views. The question of when person detections in different views or at different time instants can be linked to the same individual is of funda mental importance to the video analysis in large-scale network of cameras. This is the pers on reidentification problem. The paper focuses on algorithms that use the overall appearance of an individual as opposed to passive biometrics such as face and gait. Methods that effec tively address the challenges associated with changes in illumination, pose, and clothing a ppearance variation are discussed. More specifically, the development of a set of models that ca pture the overall appearance of an individual and can effectively be used for information retrieval are reviewed. Some of them provide a holistic description of a person, and some o th rs require an intermediate step where specific body parts need to be identified. Some ar e designed to extract appearance features over time, and some others can operate reliabl y also on single images. The paper discusses algorithms for speeding up the computation of signatures. In particular it describes very fast procedures for computing co-occurrenc e matrices by leveraging a generalization of the integral representation of images. The alg orithms are deployed and tested in a camera network comprising of three cameras with non-overl apping field of views, where a multi-camera multi-target tracker links the tracks in dif ferent cameras by reidentifying the same people appearing in different views.",
"title": ""
}
] | [
{
"docid": "neg:1840169_0",
"text": "This research investigates the influence of religious preference and practice on the use of contraception. Much of earlier research examines the level of religiosity on sexual activity. This research extends this reasoning by suggesting that peer group effects create a willingness to mask the level of sexuality through the use of contraception. While it is understood that certain religions, that is, Catholicism does not condone the use of contraceptives, this research finds that Catholics are more likely to use certain methods of contraception than other religious groups. With data on contraceptive use from the Center for Disease Control’s Family Growth Survey, a likelihood probability model is employed to investigate the impact religious affiliation on contraception use. Findings suggest a preference for methods that ensure non-pregnancy while preventing feelings of shame and condemnation in their religious communities.",
"title": ""
},
{
"docid": "neg:1840169_1",
"text": "Biological processes are complex phenomena involving a series of events that are related to one another through various relationships. Systems that can understand and reason over biological processes would dramatically improve the performance of semantic applications involving inference such as question answering (QA) – specifically “How?” and “Why?” questions. In this paper, we present the task of process extraction, in which events within a process and the relations between the events are automatically extracted from text. We represent processes by graphs whose edges describe a set of temporal, causal and co-reference event-event relations, and characterize the structural properties of these graphs (e.g., the graphs are connected). Then, we present a method for extracting relations between the events, which exploits these structural properties by performing joint inference over the set of extracted relations. On a novel dataset containing 148 descriptions of biological processes (released with this paper), we show significant improvement comparing to baselines that disregard process structure.",
"title": ""
},
{
"docid": "neg:1840169_2",
"text": "In this work, we present a novel peak-piloted deep network (PPDN) that uses a sample with peak expression (easy sample) to supervise the intermediate feature responses for a sample of non-peak expression (hard sample) of the same type and from the same subject. The expression evolving process from nonpeak expression to peak expression can thus be implicitly embedded in the network to achieve the invariance to expression intensities.",
"title": ""
},
{
"docid": "neg:1840169_3",
"text": "In this paper an impulse-radio ultra-wideband (IR-UWB) hardware demonstrator is presented, which can be used as a radar sensor for highly precise object tracking and breath-rate sensing. The hardware consists of an impulse generator integrated circuit (IC) in the transmitter and a correlator IC with an integrating baseband circuit as correlation receiver. The radiated impulse is close to a fifth Gaussian derivative impulse with σ = 51 ps, efficiently using the Federal Communications Commission indoor mask. A detailed evaluation of the hardware is given. For the tracking, an impulse train is radiated by the transmitter, and the reflections of objects in front of the sensor are collected by the receiver. With the reflected signals, a continuous hardware correlation is computed by a sweeping impulse correlation. The correlation is applied to avoid sampling of the RF impulse with picosecond precision. To localize objects precisely in front of the sensor, three impulse tracking methods are compared: Tracking of the maximum impulse peak, tracking of the impulse slope, and a slope-to-slope tracking of the object's reflection and the signal of the static direct coupling between transmit and receive antenna; the slope-to-slope tracking showing the best performance. The precision of the sensor is shown by a measurement with a metal plate of 1-mm sinusoidal deviation, which is clearly resolved. Further measurements verify the use of the demonstrated principle as a breathing sensor. The breathing signals of male humans and a seven-week-old infant are presented, qualifying the IR-UWB radar principle as a useful tool for breath-rate determination.",
"title": ""
},
{
"docid": "neg:1840169_4",
"text": "A wide variety of smartphone applications today rely on third-party advertising services, which provide libraries that are linked into the hosting application. This situation is undesirable for both the application author and the advertiser. Advertising libraries require their own permissions, resulting in additional permission requests to users. Likewise, a malicious application could simulate the behavior of the advertising library, forging the user’s interaction and stealing money from the advertiser. This paper describes AdSplit, where we extended Android to allow an application and its advertising to run as separate processes, under separate user-ids, eliminating the need for applications to request permissions on behalf of their advertising libraries, and providing services to validate the legitimacy of clicks, locally and remotely. AdSplit automatically recompiles apps to extract their ad services, and we measure minimal runtime overhead. AdSplit also supports a system resource that allows advertisements to display their content in an embedded HTML widget, without requiring any native code.",
"title": ""
},
{
"docid": "neg:1840169_5",
"text": "We propose a novel 3D integration method, called Vertical integration after Stacking (ViaS) process. The process enables 3D integration at significantly low cost, since it eliminates costly processing steps such as chemical vapor deposition used to form inorganic insulator layers and Cu plating used for via filling of vertical conductors. Furthermore, the technique does not require chemical-mechanical polishing (CMP) nor temporary bonding to handle thin wafers. The integration technique consists of forming through silicon via (TSV) holes in pre-multi-stacked wafers (> 2 wafers) which have no initial vertical electrical interconnections, followed by insulation of holes by polymer coating and via filling by molten metal injection. In the technique, multiple wafers are etched at once to form TSV holes followed by coating of the holes by conformal thin polymer layers. Finally the holes are filled by using molten metal injection so that a formation of interlayer connections of arbitrary choice is possible. In this paper, we demonstrate 3-chip-stacked test vehicle with 50 × 50 μm-square TSVs assembled by using this technique.",
"title": ""
},
{
"docid": "neg:1840169_6",
"text": "The human arm has 7 degrees of freedom (DOF) while only 6 DOF are required to position the wrist and orient the palm. Thus, the inverse kinematics of an human arm has a nonunique solution. Resolving this redundancy becomes critical as the human interacts with a wearable robot and the inverse kinematics solution of these two coupled systems must be identical to guarantee an seamless integration. The redundancy of the arm can be formulated by defining the swivel angle, the rotation angle of the plane defined by the upper and lower arm around a virtual axis that connects the shoulder and wrist joints. Analyzing reaching tasks recorded with a motion capture system indicates that the swivel angle is selected such that when the elbow joint is flexed, the palm points to the head. Based on these experimental results, a new criterion is formed to resolve the human arm redundancy. This criterion was implemented into the control algorithm of an upper limb 7-DOF wearable robot. Experimental results indicate that by using the proposed redundancy resolution criterion, the error between the predicted and the actual swivel angle adopted by the motor control system is less then 5°.",
"title": ""
},
{
"docid": "neg:1840169_7",
"text": "Classical conditioning, the simplest form of associative learning, is one of the most studied paradigms in behavioural psychology. Since the formal description of classical conditioning by Pavlov, lesion studies in animals have identified a number of anatomical structures involved in, and necessary for, classical conditioning. In the 1980s, with the advent of functional brain imaging techniques, particularly positron emission tomography (PET), it has been possible to study the functional anatomy of classical conditioning in humans. The development of functional magnetic resonance imaging (fMRI)--in particular single-trial or event-related fMRI--has now considerably advanced the potential of neuroimaging for the study of this form of learning. Recent event-related fMRI and PET studies are adding crucial data to the current discussion about the putative role of the amygdala in classical fear conditioning in humans.",
"title": ""
},
{
"docid": "neg:1840169_8",
"text": "The Scientific Computation Language (SCL) was designed mainly for developing computational models in education and research. This paper presents the justification for such a language, its relevant features, and a case study of a computational model implemented with the SCL.\n Development of the SCL language is part of the OOPsim project, which has had partial NSF support (CPATH). One of the goals of this project is to develop tools and approaches for designing and implementing computational models, emphasizing multi-disciplinary teams in the development process.\n A computational model is a computer implementation of the solution to a (scientific) problem for which a mathematical representation has been formulated. Developing a computational model consists of applying Computer Science concepts, principles and methods.\n The language syntax is defined at a higher level of abstraction than C, and includes language statements for improving program readability, debugging, maintenance, and correctness. The language design was influenced by Ada, Pascal, Eiffel, Java, C, and C++.\n The keywords have been added to maintain full compatibility with C. The SCL language translator is an executable program that is implemented as a one-pass language processor that generates C source code. The generated code can be integrated conveniently with any C and/or C++ library, on Linux and Windows (and MacOS). The semantics of SCL is informally defined to be the same C semantics.",
"title": ""
},
{
"docid": "neg:1840169_9",
"text": "The currently operational (March 1976) version of the INGRES database management system is described. This multiuser system gives a relational view of data, supports two high level nonprocedural data sublanguages, and runs as a collection of user processes on top of the UNIX operating system for Digital Equipment Corporation PDP 11/40, 11/45, and 11/70 computers. Emphasis is on the design decisions and tradeoffs related to (1) structuring the system into processes, (2) embedding one command language in a general purpose programming language, (3) the algorithms implemented to process interactions, (4) the access methods implemented, (5) the concurrency and recovery control currently provided, and (6) the data structures used for system catalogs and the role of the database administrator.\nAlso discussed are (1) support for integrity constraints (which is only partly operational), (2) the not yet supported features concerning views and protection, and (3) future plans concerning the system.",
"title": ""
},
{
"docid": "neg:1840169_10",
"text": "In this paper we study how individual sensors can compress their observations in a privacy-preserving manner. We propose a randomized requantization scheme that guarantees local differential privacy, a strong model for privacy in which individual data holders must mask their information before sending it to an untrusted third party. For our approach, the problem becomes an optimization over discrete mem-oryless channels between the sensor observations and their compressed version. We show that for a fixed compression ratio, finding privacy-optimal channel subject to a distortion constraint is a quasiconvex optimization problem that can be solved by the bisection method. Our results indicate interesting tradeoffs between the privacy risk, compression ratio, and utility, or distortion. For example, in the low distortion regime, we can halve the bit rate at little cost in distortion while maintaining the same privacy level. We illustrate our approach for a simple example of privatizing and recompressing lowpass signals and show that it yields better tradeoffs than existing approaches based on noise addition. Our approach may be useful in several privacy-sensitive monitoring applications envisioned for the Internet of Things (IoT).",
"title": ""
},
{
"docid": "neg:1840169_11",
"text": "This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.",
"title": ""
},
{
"docid": "neg:1840169_12",
"text": "OBJECTIVE\n: The objective of this study was to examine the histologic features of the labia minora, within the context of the female sexual response.\n\n\nMETHODS\n: Eight cadaver vulvectomy specimens were used for this study. All specimens were embedded in paraffin and were serially sectioned. Selected sections were stained with hematoxylin and eosin, elastic Masson trichrome, and S-100 antibody stains.\n\n\nRESULTS\n: The labia minora are thinly keratinized structures. The primary supporting tissue is collagen, with many vascular and neural elements structures throughout its core and elastin interspersed throughout.\n\n\nCONCLUSIONS\n: The labia minora are specialized, highly vascular folds of tissue with an abundance of neural elements. These features corroborate previous functional and observational data that the labia minora engorge with arousal and have a role in the female sexual response.",
"title": ""
},
{
"docid": "neg:1840169_13",
"text": "Knowledge base (KB) sharing among parties has been proven to be beneficial in several scenarios. However such sharing can arise considerable privacy concerns depending on the sensitivity of the information stored in each party's KB. In this paper, we focus on the problem of exporting a (part of a) KB of a party towards a receiving one. We introduce a novel solution that enables parties to export data in a privacy-preserving fashion, based on a probabilistic data structure, namely the \\emph{count-min sketch}. With this data structure, KBs can be exported in the form of key-value stores and inserted into a set of count-min sketches, where keys can be sensitive and values are counters. Count-min sketches can be tuned to achieve a given key collision probability, which enables a party to deny having certain keys in its own KB, and thus to preserve its privacy. We also introduce a metric, the γ-deniability (novel for count-min sketches), to measure the privacy level obtainable with a count-min sketch. Furthermore, since the value associated to a key can expose to linkage attacks, noise can be added to a count-min sketch to ensure controlled error on retrieved values. Key collisions and noise alter the values contained in the exported KB, and can affect negatively the accuracy of a computation performed on the exported KB. We explore the tradeoff between privacy preservation and computation accuracy by experimental evaluations in two scenarios related to malware detection.",
"title": ""
},
{
"docid": "neg:1840169_14",
"text": "The study of microstrip patch antennas has made great progress in recent years. Compared with conventional antennas, microstrip patch antennas have more advantages and better prospects. They are lighter in weight, low volume, low cost, low profile, smaller in dimension and ease of fabrication and conformity. Moreover, the microstrip patch antennas can provide dual and circular polarizations, dual-frequency operation, frequency agility, broad band-width, feedline flexibility, beam scanning omnidirectional patterning. In this paper we discuss the microstrip antenna, types of microstrip antenna, feeding techniques and application of microstrip patch antenna with their advantage and disadvantages over conventional microwave antennas.",
"title": ""
},
{
"docid": "neg:1840169_15",
"text": "Interest in supply chain management has steadily increased since the 1980s when firms saw the benefits of collaborative relationships within and beyond their own organization. Firms are finding that they can no longer compete effectively in isolation of their suppliers or other entities in the supply chain. A number of definitions of supply chain management have been proposed in the literature and in practice. This paper defines the concept of supply chain management and discusses its historical evolution. The term does not replace supplier partnerships, nor is it a description of the logistics function. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Introduction to supply chain concepts Firms can no longer effectively compete in isolation of their suppliers and other entities in the supply chain. Interest in the concept of supply chain management has steadily increased since the 1980s when companies saw the benefits of collaborative relationships within and beyond their own organization. A number of definitions have been proposed concerning the concept of “the supply chain” and its management. This paper defines the concept of the supply chain and discusses the evolution of supply chain management. The term does not replace supplier partnerships, nor is it a description of the logistics function. Industry groups are now working together to improve the integrative processes of supply chain management and accelerate the benefits available through successful implementation. The competitive importance of linking a firm’s supply chain strategy to its overall business strategy and some practical guidelines are offered for successful supply chain management. Definition of supply chain Various definitions of a supply chain have been offered in the past several years as the concept has gained popularity. 
The APICS Dictionary describes the supply chain as: 1 the processes from the initial raw materials to the ultimate consumption of the finished product linking across supplieruser companies; and 2 the functions within and outside a company that enable the value chain to make products and provide services to the customer (Cox et al., 1995). Another source defines supply chain as, the network of entities through which material flows. Those entities may include suppliers, carriers, manufacturing sites, distribution centers, retailers, and customers (Lummus and Alber, 1997). The Supply Chain Council (1997) uses the definition: “The supply chain – a term increasingly used by logistics professionals – encompasses every effort involved in producing and delivering a final product, from the supplier’s supplier to the customer’s customer. Four basic processes – plan, source, make, deliver – broadly define these efforts, which include managing supply and demand, sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, and delivery to the customer.” Quinn (1997) defines the supply chain as “all of those activities associated with moving goods from the raw-materials stage through to the end user. This includes sourcing and procurement, production scheduling, order processing, inventory management, transportation, warehousing, and customer service. Importantly, it also embodies the information systems so necessary to monitor all of those activities.” In addition to defining the supply chain, several authors have further defined the concept of supply chain management. As defined by Ellram and Cooper (1993), supply chain management is “an integrating philosophy to manage the total flow of a distribution channel from supplier to ultimate customer”. 
Monczka and Morgan (1997) state that “integrated supply chain management is about going from the external customer and then managing all the processes that are needed to provide the customer with value in a horizontal way”. They believe that supply chains, not firms, compete and that those who will be the strongest competitors are those that “can provide management and leadership to the fully integrated supply chain including external customer as well as prime suppliers, their suppliers, and their suppliers’ suppliers”. From these definitions, a summary definition of the supply chain can be stated as: all the activities involved in delivering a product from raw material through to the customer including sourcing raw materials and parts, manufacturing and assembly, warehousing and inventory tracking, order entry and order management, distribution across all channels, delivery to the customer, and the information systems necessary to monitor all of these activities. Supply chain management coordinates and integrates all of these activities into a seamless process. It links all of the partners in the chain including departments",
"title": ""
},
{
"docid": "neg:1840169_16",
"text": "In recent years both practitioners and academics have shown an increasing interest in the assessment of marketing -performance. This paper explores the metrics that firms select and some reasons for those choices. Our data are drawn from two UK studies. The first reports practitioner usage by the main metrics categories (consumer behaviour and intermediate, trade customer, competitor, accounting and innovativeness). The second considers which individual metrics are seen as the most important and whether that differs by sector. The role of brand equity in performance assessment and top",
"title": ""
},
{
"docid": "neg:1840169_17",
"text": "We use variation at a set of eight human Y chromosome microsatellite loci to investigate the demographic history of the Y chromosome. Instead of assuming a population of constant size, as in most of the previous work on the Y chromosome, we consider a model which permits a period of recent population growth. We show that for most of the populations in our sample this model fits the data far better than a model with no growth. We estimate the demographic parameters of this model for each population and also the time to the most recent common ancestor. Since there is some uncertainty about the details of the microsatellite mutation process, we consider several plausible mutation schemes and estimate the variance in mutation size simultaneously with the demographic parameters of interest. Our finding of a recent common ancestor (probably in the last 120,000 years), coupled with a strong signal of demographic expansion in all populations, suggests either a recent human expansion from a small ancestral population, or natural selection acting on the Y chromosome.",
"title": ""
}
] |
1840170 | A Quick Startup Technique for High- $Q$ Oscillators Using Precisely Timed Energy Injection | [
{
"docid": "pos:1840170_0",
"text": "A 51.3-MHz 18-<inline-formula><tex-math notation=\"LaTeX\">$\\mu\\text{W}$</tex-math></inline-formula> 21.8-ppm/°C relaxation oscillator is presented in 90-nm CMOS. The proposed oscillator employs an integrated error feedback and composite resistors to minimize its sensitivity to temperature variations. For a temperature range from −20 °C to 100 °C, the fabricated circuit demonstrates a frequency variation less than ±0.13%, leading to an average frequency drift of 21.8 ppm/°C. As the supply voltage changes from 0.8 to 1.2 V, the frequency variation is ±0.53%. The measured rms jitter and phase noise at 1-MHz offset are 89.27 ps and −83.29 dBc/Hz, respectively.",
"title": ""
},
{
"docid": "pos:1840170_1",
"text": "The design of a 1.8 GHz 3-stage current-starved ring oscillator with a process- and temperature- compensated current source is presented. Without post-fabrication calibration or off-chip components, the proposed low variation circuit is able to achieve a 65.1% reduction in the normalized standard deviation of its center frequency at room temperature and 85 ppm/ ° C temperature stability with no penalty in the oscillation frequency, the phase noise or the start-up time. Analysis on the impact of transistor scaling indicates that the same circuit topology can be applied to improve variability as feature size scales beyond the current deep submicron technology. Measurements taken on 167 test chips from two different lots fabricated in a standard 90 nm CMOS process show a 3x improvement in frequency variation compared to the baseline case of a conventional current-starved ring oscillator. The power and area for the proposed circuitry is 87 μW and 0.013 mm2 compared to 54 μ W and 0.01 mm 2 in the baseline case.",
"title": ""
}
] | [
{
"docid": "neg:1840170_0",
"text": "The present study represents the first large-scale, prospective comparison to test whether aging out of foster care contributes to homelessness risk in emerging adulthood. A nationally representative sample of adolescents investigated by the child welfare system in 2008 to 2009 from the second cohort of the National Survey of Child and Adolescent Well-being Study (NSCAW II) reported experiences of housing problems at 18- and 36-month follow-ups. Latent class analyses identified subtypes of housing problems, including literal homelessness, housing instability, and stable housing. Regressions predicted subgroup membership based on aging out experiences, receipt of foster care services, and youth and county characteristics. Youth who reunified after out-of-home placement in adolescence exhibited the lowest probability of literal homelessness, while youth who aged out experienced similar rates of literal homelessness as youth investigated by child welfare but never placed out of home. No differences existed between groups on prevalence of unstable housing. Exposure to independent living services and extended foster care did not relate with homelessness prevention. Findings emphasize the developmental importance of families in promoting housing stability in the transition to adulthood, while questioning child welfare current focus on preparing foster youth to live.",
"title": ""
},
{
"docid": "neg:1840170_1",
"text": "Extractive style query oriented multi document summariza tion generates the summary by extracting a proper set of sentences from multiple documents based on the pre given query. This paper proposes a novel multi document summa rization framework via deep learning model. This uniform framework consists of three parts: concepts extraction, summary generation, and reconstruction validation, which work together to achieve the largest coverage of the docu ments content. A new query oriented extraction technique is proposed to concentrate distributed information to hidden units layer by layer. Then, the whole deep architecture is fi ne tuned by minimizing the information loss of reconstruc tion validation. According to the concentrated information, dynamic programming is used to seek most informative set of sentences as the summary. Experiments on three bench mark datasets demonstrate the effectiveness of the proposed framework and algorithms.",
"title": ""
},
{
"docid": "neg:1840170_2",
"text": "OBJECTIVES\nTo investigate the ability of cerebrospinal fluid (CSF) and plasma measures to discriminate early-stage Alzheimer disease (AD) (defined by clinical criteria and presence/absence of brain amyloid) from nondemented aging and to assess whether these biomarkers can predict future dementia in cognitively normal individuals.\n\n\nDESIGN\nEvaluation of CSF beta-amyloid(40) (Abeta(40)), Abeta(42), tau, phosphorylated tau(181), and plasma Abeta(40) and Abeta(42) and longitudinal clinical follow-up (from 1 to 8 years).\n\n\nSETTING\nLongitudinal studies of healthy aging and dementia through an AD research center.\n\n\nPARTICIPANTS\nCommunity-dwelling volunteers (n = 139) aged 60 to 91 years and clinically judged as cognitively normal (Clinical Dementia Rating [CDR], 0) or having very mild (CDR, 0.5) or mild (CDR, 1) AD dementia.\n\n\nRESULTS\nIndividuals with very mild or mild AD have reduced mean levels of CSF Abeta(42) and increased levels of CSF tau and phosphorylated tau(181). Cerebrospinal fluid Abeta(42) level completely corresponds with the presence or absence of brain amyloid (imaged with Pittsburgh Compound B) in demented and nondemented individuals. The CSF tau/Abeta(42) ratio (adjusted hazard ratio, 5.21; 95% confidence interval, 1.58-17.22) and phosphorylated tau(181)/Abeta(42) ratio (adjusted hazard ratio, 4.39; 95% confidence interval, 1.62-11.86) predict conversion from a CDR of 0 to a CDR greater than 0.\n\n\nCONCLUSIONS\nThe very mildest symptomatic stage of AD exhibits the same CSF biomarker phenotype as more advanced AD. In addition, levels of CSF Abeta(42), when combined with amyloid imaging, augment clinical methods for identifying in individuals with brain amyloid deposits whether dementia is present or not. Importantly, CSF tau/Abeta(42) ratios show strong promise as antecedent (preclinical) biomarkers that predict future dementia in cognitively normal older adults.",
"title": ""
},
{
"docid": "neg:1840170_3",
"text": "Traceability—the ability to follow the life of software artifacts—is a topic of great interest to software developers in general, and to requirements engineers and model-driven developers in particular. This article aims to bring those stakeholders together by providing an overview of the current state of traceability research and practice in both areas. As part of an extensive literature survey, we identify commonalities and differences in these areas and uncover several unresolved challenges which affect both domains. A good common foundation for further advances regarding these challenges appears to be a combination of the formal basis and the automated recording opportunities of MDD on the one hand, and the more holistic view of traceability in the requirements engineering domain on the other hand.",
"title": ""
},
{
"docid": "neg:1840170_4",
"text": "We propose a framework for ensuring safe behavior of a reinforcement learning agent when the reward function may be difficult to specify. In order to do this, we rely on the existence of demonstrations from expert policies, and we provide a theoretical framework for the agent to optimize in the space of rewards consistent with its existing knowledge. We propose two methods to solve the resulting optimization: an exact ellipsoid-based method and a method in the spirit of the \"follow-the-perturbed-leader\" algorithm. Our experiments demonstrate the behavior of our algorithm in both discrete and continuous problems. The trained agent safely avoids states with potential negative effects while imitating the behavior of the expert in the other states.",
"title": ""
},
{
"docid": "neg:1840170_5",
"text": "Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer, what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8%, and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequentlyused security and privacy sensitive resources. These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications.",
"title": ""
},
{
"docid": "neg:1840170_6",
"text": "A miniaturized quadrature hybrid coupler, a rat-race coupler, and a 4 times 4 Butler matrix based on a newly proposed planar artificial transmission line are presented in this paper for application in ultra-high-frequency (UHF) radio-frequency identification (RFID) systems. This planar artificial transmission line is composed of microstrip quasi-lumped elements and their discontinuities and is capable of synthesizing microstrip lines with various characteristic impedances and electrical lengths. At the center frequency of the UHF RFID system, the occupied sizes of the proposed quadrature hybrid and rat-race couplers are merely 27% and 9% of those of the conventional designs. The miniaturized couplers demonstrate well-behaved wideband responses with no spurious harmonics up to two octaves. The measured results reveal excellent agreement with the simulations. Additionally, a 4 times 4 Butler matrix, which may occupy a large amount of circuit area in conventional designs, has been successfully miniaturized with the help of the proposed artificial transmission line. The circuit size of the Butler matrix is merely 21% of that of a conventional design. The experimental results show that the proposed Butler matrix features good phase control, nearly equal power splitting, and compact size and is therefore applicable to the reader modules in various RFID systems.",
"title": ""
},
{
"docid": "neg:1840170_7",
"text": "Smart electricity meters are currently deployed in millions of households to collect detailed individual electricity consumption data. Compared with traditional electricity data based on aggregated consumption, smart meter data are much more volatile and less predictable. There is a need within the energy industry for probabilistic forecasts of household electricity consumption to quantify the uncertainty of future electricity demand in order to undertake appropriate planning of generation and distribution. We propose to estimate an additive quantile regression model for a set of quantiles of the future distribution using a boosting procedure. By doing so, we can benefit from flexible and interpretable models, which include an automatic variable selection. We compare our approach with three benchmark methods on both aggregated and disaggregated scales using a smart meter data set collected from 3639 households in Ireland at 30-min intervals over a period of 1.5 years. The empirical results demonstrate that our approach based on quantile regression provides better forecast accuracy for disaggregated demand, while the traditional approach based on a normality assumption (possibly after an appropriate Box-Cox transformation) is a better approximation for aggregated demand. These results are particularly useful since more energy data will become available at the disaggregated level in the future.",
"title": ""
},
{
"docid": "neg:1840170_8",
"text": "Deep generative neural networks have proven effective at both conditional and unconditional modeling of complex data distributions. Conditional generation enables interactive control, but creating new controls often requires expensive retraining. In this paper, we develop a method to condition generation without retraining the model. By post-hoc learning latent constraints, value functions that identify regions in latent space that generate outputs with desired attributes, we can conditionally sample from these regions with gradient-based optimization or amortized actor functions. Combining attribute constraints with a universal “realism” constraint, which enforces similarity to the data distribution, we generate realistic conditional images from an unconditional variational autoencoder. Further, using gradient-based optimization, we demonstrate identity-preserving transformations that make the minimal adjustment in latent space to modify the attributes of an image. Finally, with discrete sequences of musical notes, we demonstrate zero-shot conditional generation, learning latent constraints in the absence of labeled data or a differentiable reward function. Code with dedicated cloud instance has been made publicly available (https://goo.gl/STGMGx).",
"title": ""
},
{
"docid": "neg:1840170_9",
"text": "Triboelectric nanogenerator (TENG) technology has emerged as a new mechanical energy harvesting technology with numerous advantages. This paper analyzes its charging behavior together with a load capacitor. Through numerical and analytical modeling, the charging performance of a TENG with a bridge rectifier under periodic external mechanical motion is completely analogous to that of a dc voltage source in series with an internal resistance. An optimum load capacitance that matches the TENG's impedance is observed for the maximum stored energy. This optimum load capacitance is theoretically detected to be linearly proportional to the charging cycle numbers and the inherent TENG capacitance. Experiments were also performed to further validate our theoretical anticipation and show the potential application of this paper in guiding real experimental designs.",
"title": ""
},
{
"docid": "neg:1840170_10",
"text": "This paper presents a method for speech emotion recognition using spectrograms and deep convolutional neural network (CNN). Spectrograms generated from the speech signals are input to the deep CNN. The proposed model consisting of three convolutional layers and three fully connected layers extract discriminative features from spectrogram images and outputs predictions for the seven emotions. In this study, we trained the proposed model on spectrograms obtained from Berlin emotions dataset. Furthermore, we also investigated the effectiveness of transfer learning for emotions recognition using a pre-trained AlexNet model. Preliminary results indicate that the proposed approach based on freshly trained model is better than the fine-tuned model, and is capable of predicting emotions accurately and efficiently.",
"title": ""
},
{
"docid": "neg:1840170_11",
"text": "We present short elementary proofs of the well-known Ruffini-Abel-Galois theorems on unsolvability of algebraic equations in radicals. This proof is obtained from existing expositions by stripping away material not required for the proof (but presumably required elsewhere). In particular, we do not use the terms ‘Galois group’ and even ‘group’. However, our presentation is a good way to learn (or recall) the starting idea of Galois theory: to look at how the symmetry of a polynomial is decreased when a radical is extracted. So the note provides a bridge (by showing that there is no gap) between elementary mathematics and Galois theory. The note is accessible to students familiar with polynomials, complex numbers and permutations; so the note might be interesting easy reading for professional mathematicians.",
"title": ""
},
{
"docid": "neg:1840170_12",
"text": "Fully homomorphic encryption is faced with two problems now. One is that candidate fully homomorphic encryption schemes are few. Another is that the efficiency of fully homomorphic encryption is a big question. In this paper, we propose a fully homomorphic encryption scheme based on LWE, which has better key size. Our main contributions are: (1) Based on the recently proposed binary-LWE, we choose the secret key from a binary set and modify the basic encryption scheme proposed by Lindner and Peikert in 2010. We propose a fully homomorphic encryption scheme based on the new basic encryption scheme. We analyze the correctness and give the proof of the security of our scheme. The public key, evaluation keys, and tensored ciphertext have better size in our scheme. (2) Estimating parameters for a fully homomorphic encryption scheme is an important work. We estimate the concrete parameters for our scheme. We compare these parameters between our scheme and the Bra12 scheme. Our scheme has a public key and private key that are smaller by a factor of about log q than in the Bra12 scheme. The tensored ciphertext in our scheme is smaller by a factor of about log^2 q than in the Bra12 scheme. The key switching matrix in our scheme is smaller by a factor of about log^3 q than in the Bra12 scheme.",
"title": ""
},
{
"docid": "neg:1840170_13",
"text": "Skeletal muscle atrophy is a debilitating response to starvation and many systemic diseases including diabetes, cancer, and renal failure. We had proposed that a common set of transcriptional adaptations underlie the loss of muscle mass in these different states. To test this hypothesis, we used cDNA microarrays to compare the changes in content of specific mRNAs in muscles atrophying from different causes. We compared muscles from fasted mice, from rats with cancer cachexia, streptozotocin-induced diabetes mellitus, uremia induced by subtotal nephrectomy, and from pair-fed control rats. Although the content of >90% of mRNAs did not change, including those for the myofibrillar apparatus, we found a common set of genes (termed atrogins) that were induced or suppressed in muscles in these four catabolic states. Among the strongly induced genes were many involved in protein degradation, including polyubiquitins, Ub fusion proteins, the Ub ligases atrogin-1/MAFbx and MuRF-1, multiple but not all subunits of the 20S proteasome and its 19S regulator, and cathepsin L. Many genes required for ATP production and late steps in glycolysis were down-regulated, as were many transcripts for extracellular matrix proteins. Some genes not previously implicated in muscle atrophy were dramatically up-regulated (lipin, metallothionein, AMP deaminase, RNA helicase-related protein, TG interacting factor) and several growth-related mRNAs were down-regulated (P311, JUN, IGF-1-BP5). Thus, different types of muscle atrophy share a common transcriptional program that is activated in many systemic diseases.",
"title": ""
},
{
"docid": "neg:1840170_14",
"text": "Decision Support Systems (DSS) have developed to exploit Information Technology (IT) to assist decision-makers in a wide variety of fields. The need to use spatial data in many of these diverse fields has led to increasing interest in the development of Spatial Decision Support Systems (SDSS) based around the Geographic Information System (GIS) technology. The paper examines the relationship between SDSS and GIS and suggests that SDSS is poised for further development owing to improvement in technology and the greater availability of spatial data.",
"title": ""
},
{
"docid": "neg:1840170_15",
"text": "How the circadian clock regulates the timing of sleep is poorly understood. Here, we identify a Drosophila mutant, wide awake (wake), that exhibits a marked delay in sleep onset at dusk. Loss of WAKE in a set of arousal-promoting clock neurons, the large ventrolateral neurons (l-LNvs), impairs sleep onset. WAKE levels cycle, peaking near dusk, and the expression of WAKE in l-LNvs is Clock dependent. Strikingly, Clock and cycle mutants also exhibit a profound delay in sleep onset, which can be rescued by restoring WAKE expression in LNvs. WAKE interacts with the GABAA receptor Resistant to Dieldrin (RDL), upregulating its levels and promoting its localization to the plasma membrane. In wake mutant l-LNvs, GABA sensitivity is decreased and excitability is increased at dusk. We propose that WAKE acts as a clock output molecule specifically for sleep, inhibiting LNvs at dusk to promote the transition from wake to sleep.",
"title": ""
},
{
"docid": "neg:1840170_16",
"text": "Wafer Level Package (WLP) technology has seen tremendous advances in recent years and is rapidly being adopted at the 65nm Low-K silicon node. For a true WLP, the package size is same as the die (silicon) size and the package is usually mounted directly on to the Printed Circuit Board (PCB). Board level reliability (BLR) is a bigger challenge on WLPs than the package level due to a larger CTE mismatch and difference in stiffness between silicon and the PCB [1]. The BLR performance of the devices with Low-K dielectric silicon becomes even more challenging due to their fragile nature and lower mechanical strength. A post fab re-distribution layer (RDL) with polymer stack up provides a stress buffer resulting in an improved board level reliability performance. Drop shock (DS) and temperature cycling test (TCT) are the most commonly run tests in the industry to gauge the BLR performance of WLPs. While a superior drop performance is required for devices targeting mobile handset applications, achieving acceptable TCT performance on WLPs can become challenging at times. BLR performance of WLP is sensitive to design features such as die size, die aspect ratio, ball pattern and ball density etc. In this paper, 65nm WLPs with a post fab Cu RDL have been studied for package and board level reliability. Standard JEDEC conditions are applied during the reliability testing. Here, we present a detailed reliability evaluation on multiple WLP sizes and varying ball patterns. Die size ranging from 10 mm2 to 25 mm2 were studied along with variation in design features such as die aspect ratio and the ball density (fully populated and de-populated ball pattern). All test vehicles used the aforementioned 65nm fab node.",
"title": ""
},
{
"docid": "neg:1840170_17",
"text": "Wheat straw is an abundant agricultural residue with low commercial value. An attractive alternative is utilization of wheat straw for bioethanol production. However, production costs based on the current technology are still too high, preventing commercialization of the process. In recent years, progress has been made in developing more effective pretreatment and hydrolysis processes leading to higher yield of sugars. The focus of this paper is to review the most recent advances in pretreatment, hydrolysis and fermentation of wheat straw. Based on the type of pretreatment method applied, a sugar yield of 74-99.6% of maximum theoretical was achieved after enzymatic hydrolysis of wheat straw. Various bacteria, yeasts and fungi have been investigated with the ethanol yield ranging from 65% to 99% of theoretical value. So far, the best results with respect to ethanol yield, final ethanol concentration and productivity were obtained with the native non-adapted Saccharomyses cerevisiae. Some recombinant bacteria and yeasts have shown promising results and are being considered for commercial scale-up. Wheat straw biorefinery could be the near-term solution for clean, efficient and economically-feasible production of bioethanol as well as high value-added products.",
"title": ""
},
{
"docid": "neg:1840170_18",
"text": "This study investigated whether psychologists' confidence in their clinical decisions is really justified. It was hypothesized that as psychologists study information about a case (a) their confidence about the case increases markedly and steadily but (b) the accuracy of their conclusions about the case quickly reaches a ceiling. 32 judges, including 8 clinical psychologists, read background information about a published case, divided into 4 sections. After reading each section of the case, judges answered a set of 25 questions involving personality judgments about the case. Results strongly supported the hypotheses. Accuracy did not increase significantly with increasing information, but confidence increased steadily and significantly. All judges except 2 became overconfident, most of them markedly so. Clearly, increasing feelings of confidence are not a sure sign of increasing predictive accuracy about a case.",
"title": ""
},
{
"docid": "neg:1840170_19",
"text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. The source code is provided in the project website.",
"title": ""
}
] |
1840171 | Authenticated Key Exchange over Bitcoin | [
{
"docid": "pos:1840171_0",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
},
{
"docid": "pos:1840171_1",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
}
] | [
{
"docid": "neg:1840171_0",
"text": "To predict the most salient regions of complex natural scenes, saliency models commonly compute several feature maps (contrast, orientation, motion...) and linearly combine them into a master saliency map. Since feature maps have different spatial distribution and amplitude dynamic ranges, determining their contributions to overall saliency remains an open problem. Most state-of-the-art models do not take time into account and give feature maps constant weights across the stimulus duration. However, visual exploration is a highly dynamic process shaped by many time-dependent factors. For instance, some systematic viewing patterns such as the center bias are known to dramatically vary across the time course of the exploration. In this paper, we use maximum likelihood and shrinkage methods to dynamically and jointly learn feature map and systematic viewing pattern weights directly from eye-tracking data recorded on videos. We show that these weights systematically vary as a function of time, and heavily depend upon the semantic visual category of the videos being processed. Our fusion method allows taking these variations into account, and outperforms other stateof-the-art fusion schemes using constant weights over time. The code, videos and eye-tracking data we used for this study are available online.",
"title": ""
},
{
"docid": "neg:1840171_1",
"text": "Attribute-based encryption (ABE) systems allow encrypting to uncertain receivers by means of an access policy specifying the attributes that the intended receivers should possess. ABE promises to deliver fine-grained access control of encrypted data. However, when data are encrypted using an ABE scheme, key management is difficult if there is a large number of users from various backgrounds. In this paper, we elaborate ABE and propose a new versatile cryptosystem referred to as ciphertext-policy hierarchical ABE (CP-HABE). In a CP-HABE scheme, the attributes are organized in a matrix and the users having higher-level attributes can delegate their access rights to the users at a lower level. These features enable a CP-HABE system to host a large number of users from different organizations by delegating keys, e.g., enabling efficient data sharing among hierarchically organized large groups. We construct a CP-HABE scheme with short ciphertexts. The scheme is proven secure in the standard model under non-interactive assumptions.",
"title": ""
},
{
"docid": "neg:1840171_2",
"text": "Path analysis was used to test the predictive and mediational role of self-efficacy beliefs in mathematical problem solving. Results revealed that math self-efficacy was more predictive of problem solving than was math self-concept, perceived usefulness of mathematics, prior experience with mathematics, or gender (N = 350). Self-efficacy also mediated the effect of gender and prior experience on self-concept, perceived usefulness, and problem solving. Gender and prior experience influenced self-concept, perceived usefulness, and problem solving largely through the mediational role of self-efficacy. Men had higher performance, self-efficacy, and self-concept and lower anxiety, but these differences were due largely to the influence of self-efficacy, for gender had a direct effect only on self-efficacy and a prior experience variable. Results support the hypothesized role of self-efficacy in A. Bandura's (1986) social cognitive theory.",
"title": ""
},
{
"docid": "neg:1840171_3",
"text": "Recommendation system is a type of information filtering systems that recommend various objects from a vast variety and quantity of items which are of the user interest. This results in guiding an individual in personalized way to interesting or useful objects in a large space of possible options. Such systems also help many businesses to achieve more profits to sustain in their field against their rivals. But looking at the amount of information which a business holds it becomes difficult to identify the items of user interest. Therefore personalization or user profiling is one of the challenging tasks that give access to user relevant information which can be used in solving the difficult task of classification and ranking items according to an individual's interest. Profiling can be done in various ways such as supervised or unsupervised, individual or group profiling, distributive or non-distributive profiling. Our focus in this paper will be on the dataset which we will use; we identify some interesting facts by using the Weka tool that can be used for recommending the items from the dataset. Our aim is to present a novel technique to achieve user profiling in recommendation systems. KeywordsMachine Learning; Information Retrieval; User Profiling",
"title": ""
},
{
"docid": "neg:1840171_4",
"text": "Being able to keep the graph scale small while capturing the properties of the original social graph, graph sampling provides an efficient, yet inexpensive solution for social network analysis. The challenge is how to create a small, but representative sample out of the massive social graph with millions or even billions of nodes. Several sampling algorithms have been proposed in previous studies, but there lacks fair evaluation and comparison among them. In this paper, we analyze the state-of art graph sampling algorithms and evaluate their performance on some widely recognized graph properties on directed graphs using large-scale social network datasets. We evaluate not only the commonly used node degree distribution, but also clustering coefficient, which quantifies how well connected are the neighbors of a node in a graph. Through the comparison we have found that none of the algorithms is able to obtain satisfied sampling results in both of these properties, and the performance of each algorithm differs much in different kinds of datasets.",
"title": ""
},
{
"docid": "neg:1840171_5",
"text": "This study presents an online multiparameter estimation scheme for interior permanent magnet motor drives that exploits the switching ripple of finite control set (FCS) model predictive control (MPC). The combinations consisting of two, three, and four parameters are analysed for observability at different operating states. Most of the combinations are rank deficient without persistent excitation (PE) of the system, e.g. by signal injection. This study shows that high frequency current ripples by MPC with FCS are sufficient to create PE in the system. This study also analyses parameter coupling in estimation that results in wrong convergence and proposes a decoupling technique. The observability conditions for all the combinations are experimentally validated. Finally, a full parameter estimation along with the decoupling technique is tested at different operating conditions.",
"title": ""
},
{
"docid": "neg:1840171_6",
"text": "BACKGROUND\nPrimary and tension-free closure of a flap is often required after particular surgical procedures (e.g., guided bone regeneration). Other times, flap advancement may be desired for situations such as root coverage.\n\n\nMETHODS\nThe literature was searched for articles that addressed techniques, limitations, and complications associated with flap advancement. These articles were used as background information. In addition, reference information regarding anatomy was cited as necessary to help describe surgical procedures.\n\n\nRESULTS\nThis article describes techniques to advance mucoperiosteal flaps, which facilitate healing. Methods are presented for a variety of treatment scenarios, ranging from minor to major coronal tissue advancement. Anatomic landmarks are identified that need to be considered during surgery. In addition, management of complications associated with flap advancement is discussed.\n\n\nCONCLUSIONS\nTension-free primary closure is attainable. The technique is dependent on the extent that the flap needs to be advanced.",
"title": ""
},
{
"docid": "neg:1840171_7",
"text": "In this paper, we argue for a theoretical separation of the free-energy principle from Helmholtzian accounts of the predictive brain. The free-energy principle is a theoretical framework capturing the imperative for biological self-organization in information-theoretic terms. The free-energy principle has typically been connected with a Bayesian theory of predictive coding, and the latter is often taken to support a Helmholtzian theory of perception as unconscious inference. If our interpretation is right, however, a Helmholtzian view of perception is incompatible with Bayesian predictive coding under the free-energy principle. We argue that the free energy principle and the ecological and enactive approach to mind and life make for a much happier marriage of ideas. We make our argument based on three points. First we argue that the free energy principle applies to the whole animal–environment system, and not only to the brain. Second, we show that active inference, as understood by the free-energy principle, is incompatible with unconscious inference understood as analogous to scientific hypothesis-testing, the main tenet of a Helmholtzian view of perception. Third, we argue that the notion of inference at work in Bayesian predictive coding under the free-energy principle is too weak to support a Helmholtzian theory of perception. Taken together these points imply that the free energy principle is best understood in ecological and enactive terms set out in this paper.",
"title": ""
},
{
"docid": "neg:1840171_8",
"text": "The filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) is being studied by many researchers as a key enabler for the fifth-generation air interface. In this paper, a hybrid peak-to-average power ratio (PAPR) reduction scheme is proposed for FBMC/OQAM signals by utilizing multi data block partial transmit sequence (PTS) and tone reservation (TR). In the hybrid PTS-TR scheme, the data blocks signal is divided into several segments, and the number of data blocks in each segment is determined by the overlapping factor. In each segment, we select the optimal data block to transmit and jointly consider the adjacent overlapped data block to achieve minimum signal power. Then, the peak reduction tones are utilized to cancel the peaks of the segment FBMC/OQAM signals. Simulation results and analysis show that the proposed hybrid PTS-TR scheme could provide better PAPR reduction than conventional PTS and TR schemes in FBMC/OQAM systems. Furthermore, we propose another multi data block hybrid PTS-TR scheme by exploiting the adjacent multi overlapped data blocks, called the multi-hybrid (M-hybrid) scheme. Simulation results show that the M-hybrid scheme can achieve about 0.2-dB PAPR performance better than the hybrid PTS-TR scheme.",
"title": ""
},
{
"docid": "neg:1840171_9",
"text": "Artificial neural networks (ANN) are used to predict 1) degree program completion, 2) earned hours, and 3) GPA for college students. The feed forward neural net architecture is used with the back propagation learning function and the logistic activation function. The database used for training and validation consisted of 17,476 student transcripts from Fall 1983 through Fall 1994. It is shown that age, gender, race, ACT scores, and reading level are significant in predicting the degree program completion, earned hours, and GPA. Of the three, earned hours proved the most difficult to predict.",
"title": ""
},
{
"docid": "neg:1840171_10",
"text": "In real-time applications, the computer is often required to service programs in response to external signals, and to guarantee that each such program is completely processed within a specified interval following the occurrence of the initiating signal. Such programs are referred to in this paper as time-critical processes, or TCPs.",
"title": ""
},
{
"docid": "neg:1840171_11",
"text": "This paper summarises the COSET shared task organised as part of the IberEval workshop. The aim of this task is to classify the topic discussed in a tweet into one of five topics related to the Spanish 2015 electoral cycle. A new dataset was curated for this task and hand-labelled by experts on the task. Moreover, the results of the 17 participants of the task and a review of their proposed systems are presented. In a second phase evaluation, we provided the participants with 15.8 millions tweets in order to test the scalability of their systems.",
"title": ""
},
{
"docid": "neg:1840171_12",
"text": "This paper presents the design of a compact UHF-RFID tag antenna with several miniaturization techniques including meandering technique and capacitive tip-loading structure. Additionally, T-matching technique is also utilized in the antenna design for impedance matching. This antenna was designed on Rogers 5880 printed circuit board (PCB) with the dimension of 43 × 26 × 0.787 mm3 and relative permittivity, □r of 2.2. The performance of the proposed antenna was analyzed in terms of matched impedance, antenna gain, return loss and tag reading range through the simulation in CST Microwave Studio software. As a result, the proposed antenna obtained a gain of 0.97dB and a maximum reading range of 5.15 m at 921 MHz.",
"title": ""
},
{
"docid": "neg:1840171_13",
"text": "Warm restart techniques on training deep neural networks often achieve better recognition accuracies and can be regarded as easy methods to obtain multiple neural networks with no additional training cost from a single training process. Ensembles of intermediate neural networks obtained by warm restart techniques can provide higher accuracy than a single neural network obtained finally by a whole training process. However, existing methods on both of warm restart and its ensemble techniques use fixed cyclic schedules and have little degree of parameter adaption. This paper extends a class of possible schedule strategies of warm restart, and clarifies their effectiveness for recognition performance. Specifically, we propose parameterized functions and various cycle schedules to improve recognition accuracies by the use of deep neural networks with no additional training cost. Experiments on CIFAR-10 and CIFAR-100 show that our methods can achieve more accurate rates than the existing cyclic training and ensemble methods.",
"title": ""
},
{
"docid": "neg:1840171_14",
"text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depthbased network of the paired depth images to constrain midlevel RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic that of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.",
"title": ""
},
{
"docid": "neg:1840171_15",
"text": "Submitted: Aug 7, 2013; Accepted: Sep 18, 2013; Published: Sep 25, 2013 Abstract: This article reviews the common used forecast error measurements. All error measurements have been joined in the seven groups: absolute forecasting errors, measures based on percentage errors, symmetric errors, measures based on relative errors, scaled errors, relative measures and other error measures. The formulas are presented and drawbacks are discussed for every accuracy measurements. To reduce the impact of outliers, an Integral Normalized Mean Square Error have been proposed. Due to the fact that each error measure has the disadvantages that can lead to inaccurate evaluation of the forecasting results, it is impossible to choose only one measure, the recommendations for selecting the appropriate error measurements are given.",
"title": ""
},
{
"docid": "neg:1840171_16",
"text": "With the advance of the Internet, e-commerce systems have become extremely important and convenient to human being. More and more products are sold on the web, and more and more people are purchasing products online. As a result, an increasing number of customers post product reviews at merchant websites and express their opinions and experiences in any network space such as Internet forums, discussion groups, and blogs. So there is a large amount of data records related to products on the Web, which are useful for both manufacturers and customers. Mining product reviews becomes a hot research topic, and prior researches mostly base on product features to analyze the opinions. So mining product features is the first step to further reviews processing. In this paper, we present how to mine product features. The proposed extraction approach is different from the previous methods because we only mine the features of the product in opinion sentences which the customers have expressed their positive or negative experiences on. In order to find opinion sentence, a SentiWordNet-based algorithm is proposed. There are three steps to perform our task: (1) identifying opinion sentences in each review which is positive or negative via SentiWordNet; (2) mining product features that have been commented on by customers from opinion sentences; (3) pruning feature to remove those incorrect features. Compared to previous work, our experimental result achieves higher precision and recall.",
"title": ""
},
{
"docid": "neg:1840171_17",
"text": "This paper presents a research model to explicate that the level of consumers’ participation on companies’ brand microblogs is influenced by their brand attachment process. That is, self-congruence and partner quality affect consumers’ trust and commitment toward companies’ brands, which in turn influence participation on brand microblogs. Further, we propose that gender has important moderating effects in our research model. We empirically test the research hypotheses through an online survey. The findings illustrate that self-congruence and partner quality have positive effects on trust and commitment. Trust affects commitment and participation, while participation is also influenced by commitment. More importantly, the effects of self-congruence on trust and commitment are found to be stronger for male consumers than females. In contrast, the effects of partner quality on trust and commitment are stronger for female consumers than males. Trust posits stronger effects on commitment and participation for males, while commitment has a stronger effect on participation for females. We believe that our findings contribute to the literature on consumer participation behavior and gender differences on brand microblogs. Companies can also apply our findings to strengthen their brand building and participation level of different consumers on their microblogs. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840171_18",
"text": "Inspired by ACTORS [7, 17], we have implemented an interpreter for a LISP-like language, SCHEME, based on the lambda calculus [2], but extended for side effects, multiprocessing, and process synchronization. The purpose of this implementation is tutorial. We wish to: 1. alleviate the confusion caused by Micro-PLANNER, CONNIVER, etc., by clarifying the embedding of non-recursive control structures in a recursive host language like LISP. 2. explain how to use these control structures, independent of such issues as pattern matching and data base manipulation. 3. have a simple concrete experimental domain for certain issues of programming semantics and style. This paper is organized into sections. The first section is a short “reference manual” containing specifications for all the unusual features of SCHEME. Next, we present a sequence of programming examples which illustrate various programming styles, and how to use them. This will raise certain issues of semantics which we will try to clarify with lambda calculus in the third section. In the fourth section we will give a general discussion of the issues facing an implementor of an interpreter for a language based on lambda calculus. Finally, we will present a completely annotated interpreter for SCHEME, written in MacLISP [13], to acquaint programmers with the tricks of the trade of implementing non-recursive control structures in a recursive language like LISP. This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the laboratory’s artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-75-C0643. 1. The SCHEME Reference Manual SCHEME is essentially a full-funarg LISP. 
LAMBDA expressions need not be QUOTEd, FUNCTIONed, or *FUNCTIONed when passed as arguments or returned as values; they will evaluate to closures of themselves. All LISP functions (i.e., EXPRs, SUBRs, and LSUBRs, but not FEXPRs, FSUBRs, or MACROs) are primitive operators in SCHEME, and have the same meaning as they have in LISP. Like LAMBDA expressions, primitive operators and numbers are self-evaluating (they evaluate to trivial closures of themselves). There are a number of special primitives known as AINTs which are to SCHEME as FSUBRs are to LISP. We will enumerate them here. IF This is the primitive conditional operator. It takes three arguments. If the first evaluates to non-NIL, it evaluates the second expression, and otherwise the third. QUOTE As in LISP, this quotes the argument form so that it will be passed verbatim as data. The abbreviation “’FOO” may be used instead of “(QUOTE FOO)”. 406 SUSSMAN AND STEELE DEFINE This is analogous to the MacLISP DEFUN primitive (but note that the LAMBDA must appear explicitly!). It is used for defining a function in the “global environment” permanently, as opposed to LABELS (see below), which is used for temporary definitions in a local environment. DEFINE takes a name and a lambda expression; it closes the lambda expression in the global environment and stores the closure in the LISP value cell of the name (which is a LISP atom). LABELS We have decided not to use the traditional LABEL primitive in this interpreter because it is difficult to define several mutually recursive functions using only LABEL. The solution, which Hewitt [17] also uses, is to adopt an ALGOLesque block syntax: (LABELS <function definition list> <expression>) This has the effect of evaluating the expression in an environment where all the functions are defined as specified by the definitions list. 
Furthermore, the functions are themselves closed in that environment, and not in the outer environment; this allows the functions to call themselves and each other recursively. For example, consider a function which counts all the atoms in a list structure recursively to all levels, but which doesn’t count the NILs which terminate lists (but NILs in the CAR of some list count). In order to perform this we use two mutually recursive functions, one to count the car and one to count the cdr, as follows: (DEFINE COUNT (LAMBDA (L) (LABELS ((COUNTCAR (LAMBDA (L) (IF (ATOM L) 1 (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L)))))) (COUNTCDR (LAMBDA (L) (IF (ATOM L) (IF (NULL L) 0 1) (+ (COUNTCAR (CAR L)) (COUNTCDR (CDR L))))))) (COUNTCDR L)))) ;Note: COUNTCDR is defined here. ASET This is the side effect primitive. It is analogous to the LISP function SET. For example, to define a cell [17], we may use ASET as follows: (DEFINE CONS-CELL (LAMBDA (CONTENTS) (LABELS ((THE-CELL (LAMBDA (MSG) (IF (EQ MSG ’CONTENTS?) CONTENTS (IF (EQ MSG ’CELL?) ’YES (IF (EQ (CAR MSG) ’<-) (BLOCK (ASET ’CONTENTS (CADR MSG)) THE-CELL) (ERROR ’|UNRECOGNIZED MESSAGE CELL| MSG ’WRNG-TYPE-ARG))))))) THE-CELL))) Those of you who may complain about the lack of ASETQ are invited to write (ASET’ foo bar) instead of (ASET ’foo bar). EVALUATE This is similar to the LISP function EVAL. It evaluates its argument, and then evaluates the resulting s-expression as SCHEME code. CATCH This is the “escape operator” which gives the user a handle on the control structure of the interpreter. The expression: (CATCH <identifier> <expression>) evaluates <expression> in an environment where <identifier> is bound to a continuation which is “just about to return from the CATCH”; that is, if the continuation is called as a function of one argument, then control proceeds as if the CATCH expression had returned with the supplied (evaluated) argument as its value. 
For example, consider the following obscure definition of SQRT (Sussman’s favorite style/Steele’s least favorite): (DEFINE SQRT (LAMBDA (X EPSILON) ((LAMBDA (ANS LOOPTAG) (CATCH RETURNTAG (PROGN (ASET ’LOOPTAG (CATCH M M)) ;CREATE PROG TAG (IF (< (ABS (-$ (*$ ANS ANS) X)) EPSILON) (RETURNTAG ANS) ;RETURN NIL) ;JFCL (ASET ’ANS (//$ (+$ (//$ X ANS) ANS) 2.0)) (LOOPTAG LOOPTAG)))) ;GOTO 1.0 NIL))) Anyone who doesn’t understand how this manages to work probably should not attempt to use CATCH. As another example, we can define a THROW function, which may then be used with CATCH much as they are in LISP: (DEFINE THROW (LAMBDA (TAG RESULT) (TAG RESULT))) CREATE!PROCESS This is the process generator for multiprocessing. It takes one argument, an expression to be evaluated in the current environment as a separate parallel process. If the expression ever returns a value, the process automatically terminates. The value of CREATE!PROCESS is a process id for the newly generated process. Note that the newly created process will not actually run until it is explicitly started. START!PROCESS This takes one argument, a process id, and starts up that process. It then runs. STOP!PROCESS This also takes a process id, but stops the process. The stopped process may be continued from where it was stopped by using START!PROCESS again on it. The magic global variable **PROCESS** always contains the process id of the currently running process; thus a process can stop itself by doing (STOP!PROCESS **PROCESS**). A stopped process is garbage collected if no live process has a pointer to its process id. EVALUATE!UNINTERRUPTIBLY This is the synchronization primitive. It evaluates an expression uninterruptibly; i.e., no other process may run until the expression has returned a value. 
Note that if a funarg is returned from the scope of an EVALUATE!UNINTERRUPTIBLY, then that funarg will be uninterruptible when it is applied; that is, the uninterruptibility property follows the rules of variable scoping. For example, consider the following function: (DEFINE SEMGEN (LAMBDA (SEMVAL) (LIST (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (ASET’ SEMVAL (+ SEMVAL 1)))) (LABELS ((P (LAMBDA () (EVALUATE!UNINTERRUPTIBLY (IF (PLUSP SEMVAL) (ASET’ SEMVAL (- SEMVAL 1)) (P)))))) P)))) This returns a pair of functions which are V and P operations on a newly created semaphore. The argument to SEMGEN is the initial value for the semaphore. Note that P busy-waits by iterating if necessary; because EVALUATE!UNINTERRUPTIBLY uses variable-scoping rules, other processes have a chance to get in at the beginning of each iteration. This busy-wait can be made much more efficient by replacing the expression (P) in the definition of P with ((LAMBDA (ME) (BLOCK (START!PROCESS (CREATE!PROCESS ’(START!PROCESS ME))) (STOP!PROCESS ME) (P))) **PROCESS**) Let’s see you figure this one out! Note that a STOP!PROCESS within an EVALUATE!UNINTERRUPTIBLY forces the process to be swapped out even if it is the current one, and so other processes get to run; but as soon as it gets swapped in again, others are locked out as before. Besides the AINTs, SCHEME has a class of primitives known as AMACROs. These are similar to MacLISP MACROs, in that they are expanded into equivalent code before being executed. Some AMACROs supplied with the SCHEME interpreter: COND This is like the MacLISP COND statement, except that singleton clauses (where the result of the predicate is the returned value) are not allowed. AND, OR These are also as in MacLISP. BLOCK This is like the MacLISP PROGN, but arranges to evaluate its last argument without an extra net control frame (explained later), so that the last argument may be involved in an iteration. 
Note that in SCHEME, unlike MacLISP, the body of a LAMBDA expression is not an implicit PROGN. DO This is like the MacLISP “new-style” DO; old-style DO is not supported. AMAPCAR, AMAPLIST These are like MAPCAR and MAPLIST, but they expect a SCHEME lambda closure for the first argument. To use SCHEME, simply incant at DDT (on MIT-AI):",
"title": ""
}
] |
1840172 | Classifying Lexical-semantic Relationships by Exploiting Sense / Concept Representations | [
{
"docid": "pos:1840172_0",
"text": "We present AutoExtend, a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset/lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks.",
"title": ""
},
{
"docid": "pos:1840172_1",
"text": "Semantic hierarchy construction aims to build structures of concepts linked by hypernym–hyponym (“is-a”) relations. A major challenge for this task is the automatic discovery of such relations. This paper proposes a novel and effective method for the construction of semantic hierarchies based on word embeddings, which can be used to measure the semantic relationship between words. We identify whether a candidate word pair has hypernym–hyponym relation by using the word-embedding-based semantic projections between words and their hypernyms. Our result, an F-score of 73.74%, outperforms the state-of-theart methods on a manually labeled test dataset. Moreover, combining our method with a previous manually-built hierarchy extension method can further improve Fscore to 80.29%.",
"title": ""
}
] | [
{
"docid": "neg:1840172_0",
"text": "Multi-agent cooperation is an important feature of the natural world. Many tasks involve individual incentives that are misaligned with the common good, yet a wide range of organisms from bacteria to insects and humans are able to overcome their differences and collaborate. Therefore, the emergence of cooperative behavior amongst self-interested individuals is an important question for the fields of multi-agent reinforcement learning (MARL) and evolutionary theory. Here, we study a particular class of multiagent problems called intertemporal social dilemmas (ISDs), where the conflict between the individual and the group is particularly sharp. By combining MARL with appropriately structured natural selection, we demonstrate that individual inductive biases for cooperation can be learned in a model-free way. To achieve this, we introduce an innovative modular architecture for deep reinforcement learning agents which supports multi-level selection. We present results in two challenging environments, and interpret these in the context of cultural and ecological evolution.",
"title": ""
},
{
"docid": "neg:1840172_1",
"text": "Internet of Things (IoT) communication is vital for the developing of smart communities. The rapid growth of IoT depends on reliable wireless networks. The evolving 5G cellular system addresses this challenge by adopting cloud computing technology in Radio Access Network (RAN); namely Cloud RAN or CRAN. CRAN enables better scalability, flexibility, and performance that allows 5G to provide connectivity for the vast volume of IoT devices envisioned for smart cities. This work investigates the load balance (LB) problem in CRAN, with the goal of reducing latencies experienced by IoT communications. Eight practical LB algorithms are studied and evaluated in CRAN environment, based on real cellular network traffic characteristics provided by Nokia Research. Experiment results on queue-length analysis show that the simple, light-weight queue-based LB is almost as effectively as the much more complex waiting-time-based LB. We believe that this study is significant in enabling 5G networks for providing IoT communication backbone in the emerging smart communities; it also has wide applications in other distributed systems.",
"title": ""
},
{
"docid": "neg:1840172_2",
"text": "The use of herbal medicinal products and supplements has increased tremendously over the past three decades with not less than 80% of people worldwide relying on them for some part of primary healthcare. Although therapies involving these agents have shown promising potential with the efficacy of a good number of herbal products clearly established, many of them remain untested and their use are either poorly monitored or not even monitored at all. The consequence of this is an inadequate knowledge of their mode of action, potential adverse reactions, contraindications, and interactions with existing orthodox pharmaceuticals and functional foods to promote both safe and rational use of these agents. Since safety continues to be a major issue with the use of herbal remedies, it becomes imperative, therefore, that relevant regulatory authorities put in place appropriate measures to protect public health by ensuring that all herbal medicines are safe and of suitable quality. This review discusses toxicity-related issues and major safety concerns arising from the use of herbal medicinal products and also highlights some important challenges associated with effective monitoring of their safety.",
"title": ""
},
{
"docid": "neg:1840172_3",
"text": "Industry 4.0 has become more popular due to recent developments in cyber-physical systems, big data, cloud computing, and industrial wireless networks. Intelligent manufacturing has produced a revolutionary change, and evolving applications, such as product lifecycle management, are becoming a reality. In this paper, we propose and implement a manufacturing big data solution for active preventive maintenance in manufacturing environments. First, we provide the system architecture that is used for active preventive maintenance. Then, we analyze the method used for collection of manufacturing big data according to the data characteristics. Subsequently, we perform data processing in the cloud, including the cloud layer architecture, the real-time active maintenance mechanism, and the offline prediction and analysis method. Finally, we analyze a prototype platform and implement experiments to compare the traditionally used method with the proposed active preventive maintenance method. The manufacturing big data method used for active preventive maintenance has the potential to accelerate implementation of Industry 4.0.",
"title": ""
},
{
"docid": "neg:1840172_4",
"text": "Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in the recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically, we are interested in finding what part of the input (pixels in images or words in questions) the VQA model focuses on while answering the question. To tackle this problem, we use two visualization techniques – guided backpropagation and occlusion – to find important words in the question and important regions in the image. We then present qualitative and quantitative analyses of these importance maps. We found that even without explicit attention mechanisms, VQA models may sometimes be implicitly attending to relevant regions in the image, and often to appropriate words in the question.",
"title": ""
},
{
"docid": "neg:1840172_5",
"text": "This article presents guiding principles for the assessment of competence developed by the members of the American Psychological Association’s Task Force on Assessment of Competence in Professional Psychology. These principles are applicable to the education, training, and credentialing of professional psychologists, and to practicing psychologists across the professional life span. The principles are built upon a review of competency assessment models, including practices in both psychology and other professions. These principles will help to ensure that psychologists reinforce the importance of a culture of competence. The implications of the principles for professional psychology also are highlighted.",
"title": ""
},
{
"docid": "neg:1840172_6",
"text": "Revealing the latent community structure, which is crucial to understanding the features of networks, is an important problem in network and graph analysis. During the last decade, many approaches have been proposed to solve this challenging problem in diverse ways, i.e. different measures or data structures. Unfortunately, experimental reports on existing techniques fell short in validity and integrity since many comparisons were not based on a unified code base or merely discussed in theory. We engage in an in-depth benchmarking study of community detection in social networks. We formulate a generalized community detection procedure and propose a procedure-oriented framework for benchmarking. This framework enables us to evaluate and compare various approaches to community detection systematically and thoroughly under identical experimental conditions. Upon that we can analyze and diagnose the inherent defect of existing approaches deeply, and further make effective improvements correspondingly. We have re-implemented ten state-of-the-art representative algorithms upon this framework and make comprehensive evaluations of multiple aspects, including the efficiency evaluation, performance evaluations, sensitivity evaluations, etc. We discuss their merits and faults in depth, and draw a set of take-away interesting conclusions. In addition, we present how we can make diagnoses for these algorithms resulting in significant improvements.",
"title": ""
},
{
"docid": "neg:1840172_7",
"text": "A straight-line drawing of a plane graph is called an open rectangle-of-influence drawing if there is no vertex in the proper inside of the axis-parallel rectangle defined by the two ends of every edge. In an inner triangulated plane graph, every inner face is a triangle although the outer face is not always a triangle. In this paper, we first obtain a sufficient condition for an inner triangulated plane graph G to have an open rectangle-of-influence drawing; the condition is expressed in terms of a labeling of angles of a subgraph of G. We then present an O(n/log n)-time algorithm to examine whether G satisfies the condition and, if so, construct an open rectangle-of-influence drawing of G on an (n − 1) × (n − 1) integer grid, where n is the number of vertices in G.",
"title": ""
},
{
"docid": "neg:1840172_8",
"text": "Deep neural networks (NNs) are powerful black box predictors that have recently achieved impressive performance on a wide spectrum of tasks. Quantifying predictive uncertainty in NNs is a challenging and yet unsolved problem. Bayesian NNs, which learn a distribution over weights, are currently the state-of-the-art for estimating predictive uncertainty; however these require significant modifications to the training procedure and are computationally expensive compared to standard (non-Bayesian) NNs. We propose an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates. Through a series of experiments on classification and regression benchmarks, we demonstrate that our method produces well-calibrated uncertainty estimates which are as good or better than approximate Bayesian NNs. To assess robustness to dataset shift, we evaluate the predictive uncertainty on test examples from known and unknown distributions, and show that our method is able to express higher uncertainty on out-of-distribution examples. We demonstrate the scalability of our method by evaluating predictive uncertainty estimates on ImageNet.",
"title": ""
},
{
"docid": "neg:1840172_9",
"text": "A planar monopole having a small size yet providing two wide bands for covering the eight-band LTE/GSM/UMTS operation in the mobile phone is presented. The small-size yet wideband operation is achieved by exciting the antenna's wide radiating plate using a coupling feed and short-circuiting it to the system ground plane of the mobile phone through a long meandered strip as an inductive shorting strip. The coupling feed leads to a wide operating band to cover the frequency range of 1710-2690 MHz for the GSM1800/1900/UMTS/LTE2300/2500 operation. The inductive shorting strip results in the generation of a wide operating band to cover the frequency range of 698-960 MHz for the LTE700/GSM850/900 operation. The planar monopole can be directly printed on the no-ground portion of the system circuit board of the mobile phone and is promising to be integrated with a practical loudspeaker. The antenna's radiating plate can also be folded into a thin structure (3 mm only) to occupy a small volume of 3 × 6 × 40 mm3 (0.72 cm3) for the eight-band LTE/GSM/UMTS operation; in this case, including the 8-mm feed gap, the antenna shows a low profile of 14 mm to the ground plane of the mobile phone. The proposed antenna, including its planar and folded structures, are suitable for slim mobile phone applications.",
"title": ""
},
{
"docid": "neg:1840172_10",
"text": "Game engine is the core of game development. Unity3D is a game engine that supports the development on multiple platforms including web, mobiles, etc. The main technology characters of Unity3D are introduced firstly. The component model, event-driven model and class relationships in Unity3D are analyzed. Finally, a generating NPCs algorithm and a shooting algorithm are respectively presented to show common key technologies in Unity3D.",
"title": ""
},
{
"docid": "neg:1840172_11",
"text": "Improving the quality of end-of-life care for hospitalized patients is a priority for healthcare organizations. Studies have shown that physicians tend to over-estimate prognoses, which in combination with treatment inertia results in a mismatch between patients wishes and actual care at the end of life. We describe a method to address this problem using Deep Learning and Electronic Health Record (EHR) data, which is currently being piloted, with Institutional Review Board approval, at an academic medical center. The EHR data of admitted patients are automatically evaluated by an algorithm, which brings patients who are likely to benefit from palliative care services to the attention of the Palliative Care team. The algorithm is a Deep Neural Network trained on the EHR data from previous years, to predict all-cause 3–12 month mortality of patients as a proxy for patients that could benefit from palliative care. Our predictions enable the Palliative Care team to take a proactive approach in reaching out to such patients, rather than relying on referrals from treating physicians, or conduct time consuming chart reviews of all patients. We also present a novel interpretation technique which we use to provide explanations of the model's predictions.",
"title": ""
},
{
"docid": "neg:1840172_12",
"text": "The Web provides a fertile ground for word-of-mouth communication and more and more consumers write about and share product-related experiences online. Given the experiential nature of tourism, such first-hand knowledge communicated by other travelers is especially useful for travel decision making. However, very little is known about what motivates consumers to write online travel reviews. A Web-based survey using an online consumer panel was conducted to investigate consumers’ motivations to write online travel reviews. Measurement scales to gauge the motivations to contribute online travel reviews were developed and tested. The results indicate that online travel review writers are mostly motivated by helping a travel service provider, concerns for other consumers, and needs for enjoyment/positive self-enhancement. Venting negative feelings through postings is clearly not seen as an important motive. Motivational differences were found for gender and income level. Implications of the findings for online travel communities and tourism marketers are discussed.",
"title": ""
},
{
"docid": "neg:1840172_13",
"text": "A Search Join is a join operation which extends a user-provided table with additional attributes based on a large corpus of heterogeneous data originating from the Web or corporate intranets. Search Joins are useful within a wide range of application scenarios: Imagine you are an analyst having a local table describing companies and you want to extend this table with attributes containing the headquarters, turnover, and revenue of each company. Or imagine you are a film enthusiast and want to extend a table describing films with attributes like director, genre, and release date of each film. This article presents the Mannheim Search Join Engine which automatically performs such table extension operations based on a large corpus of Web data. Given a local table, the Mannheim Search Join Engine searches the corpus for additional data describing the entities contained in the input table. The discovered data is then joined with the local table and is consolidated using schema matching and data fusion techniques. As result, the user is presented with an extended table and given the opportunity to examine the provenance of the added data. We evaluate the Mannheim Search Join Engine using heterogeneous data originating from over one million different websites. The data corpus consists of HTML tables, as well as Linked Data and Microdata annotations which are converted into tabular form. Our experiments show that the Mannheim Search Join Engine achieves a coverage close to 100% and a precision of around 90% for the tasks of extending tables describing cities, companies, countries, drugs, books, films, and songs.",
"title": ""
},
{
"docid": "neg:1840172_14",
"text": "Universal schema predicts the types of entities and relations in a knowledge base (KB) by jointly embedding the union of all available schema types—not only types from multiple structured databases (such as Freebase or Wikipedia infoboxes), but also types expressed as textual patterns from raw text. This prediction is typically modeled as a matrix completion problem, with one type per column, and either one or two entities per row (in the case of entity types or binary relation types, respectively). Factorizing this sparsely observed matrix yields a learned vector embedding for each row and each column. In this paper we explore the problem of making predictions for entities or entity-pairs unseen at training time (and hence without a pre-learned row embedding). We propose an approach having no per-row parameters at all; rather we produce a row vector on the fly using a learned aggregation function of the vectors of the observed columns for that row. We experiment with various aggregation functions, including neural network attention models. Our approach can be understood as a natural language database, in that questions about KB entities are answered by attending to textual or database evidence. In experiments predicting both relations and entity types, we demonstrate that despite having an order of magnitude fewer parameters than traditional universal schema, we can match the accuracy of the traditional model, and more importantly, we can now make predictions about unseen rows with nearly the same accuracy as rows available at training time.",
"title": ""
},
{
"docid": "neg:1840172_15",
"text": "In edge computing, content and service providers aim at enhancing user experience by providing services closer to the user. At the same time, infrastructure providers such as access ISPs aim at utilizing their infrastructure by selling edge resources to these content and service providers. In this context, auctions are widely used to set a price that reflects supply and demand in a fair way. In this work, we propose RAERA, the first robust auction scheme for edge resource allocation that is suitable to work with the market uncertainty typical for edge resources---here, customers typically have different valuation distribution for a wide range of heterogeneous resources. Additionally, RAERA encourages truthful bids and allows the infrastructure provider to maximize its break-even profit. Our preliminary evaluations highlight that REARA offers a time dependent fair price. Sellers can achieve higher revenue in the range of 5%-15% irrespective of varying demands and the buyers pay up to 20% lower than their top bid amount.",
"title": ""
},
{
"docid": "neg:1840172_16",
"text": "Small-cell lung cancer (SCLC) is an aggressive malignancy associated with a poor prognosis. First-line treatment has remained unchanged for decades, and a paucity of effective treatment options exists for recurrent disease. Nonetheless, advances in our understanding of SCLC biology have led to the development of novel experimental therapies. Poly [ADP-ribose] polymerase (PARP) inhibitors have shown promise in preclinical models, and are under clinical investigation in combination with cytotoxic therapies and inhibitors of cell-cycle checkpoints.Preclinical data indicate that targeting of histone-lysine N-methyltransferase EZH2, a regulator of chromatin remodelling implicated in acquired therapeutic resistance, might augment and prolong chemotherapy responses. High expression of the inhibitory Notch ligand Delta-like protein 3 (DLL3) in most SCLCs has been linked to expression of Achaete-scute homologue 1 (ASCL1; also known as ASH-1), a key transcription factor driving SCLC oncogenesis; encouraging preclinical and clinical activity has been demonstrated for an anti-DLL3-antibody–drug conjugate. The immune microenvironment of SCLC seems to be distinct from that of other solid tumours, with few tumour-infiltrating lymphocytes and low levels of the immune-checkpoint protein programmed cell death 1 ligand 1 (PD-L1). Nonetheless, immunotherapy with immune-checkpoint inhibitors holds promise for patients with this disease, independent of PD-L1 status. Herein, we review the progress made in uncovering aspects of the biology of SCLC and its microenvironment that are defining new therapeutic strategies and offering renewed hope for patients.",
"title": ""
},
{
"docid": "neg:1840172_17",
"text": "User simulators are a principal offline method for training and evaluating human-computer dialog systems. In this paper, we examine simple sequence-to-sequence neural network architectures for training end-to-end, natural language to natural language, user simulators, using only raw logs of previous interactions without any additional human labelling. We compare the neural network-based simulators with a language model (LM)-based approach for creating natural language user simulators. Using both an automatic evaluation using LM perplexity and a human evaluation, we demonstrate that the sequence-tosequence approaches outperform the LM-based method. We show correlation between LM perplexity and the human evaluation on this task, and discuss the benefits of different neural network architecture variations.",
"title": ""
},
{
"docid": "neg:1840172_18",
"text": "Face recognition systems are susceptible to presentation attacks such as printed photo attacks, replay attacks, and 3D mask attacks. These attacks, primarily studied in visible spectrum, aim to obfuscate or impersonate a person's identity. This paper presents a unique multispectral video face database for face presentation attack using latex and paper masks. The proposed Multispectral Latex Mask based Video Face Presentation Attack (MLFP) database contains 1350 videos in visible, near infrared, and thermal spectrums. Since the database consists of videos of subjects without any mask as well as wearing ten different masks, the effect of identity concealment is analyzed in each spectrum using face recognition algorithms. We also present the performance of existing presentation attack detection algorithms on the proposed MLFP database. It is observed that the thermal imaging spectrum is most effective in detecting face presentation attacks.",
"title": ""
},
{
"docid": "neg:1840172_19",
"text": "The paper presents an artificial neural network based approach in support of cash demand forecasting for automatic teller machine (ATM). On the start phase a three layer feed-forward neural network was trained using Levenberg-Marquardt algorithm and historical data sets. Then ANN was retuned every week using the last observations from ATM. The generalization properties of the ANN were improved using regularization term which penalizes large values of the ANN weights. Regularization term was adapted online depending on complexity of relationship between input and output variables. Performed simulation and experimental tests have showed good forecasting capacities of ANN. At current stage the proposed procedure is in the implementing phase for cash management tasks in ATM network. Key-Words: neural networks, automatic teller machine, cash forecasting",
"title": ""
}
] |
1840173 | The effect of Gamified mHealth App on Exercise Motivation and Physical Activity | [
{
"docid": "pos:1840173_0",
"text": "BACKGROUND\nMobile phone health apps may now seem to be ubiquitous, yet much remains unknown with regard to their usage. Information is limited with regard to important metrics, including the percentage of the population that uses health apps, reasons for adoption/nonadoption, and reasons for noncontinuance of use.\n\n\nOBJECTIVE\nThe purpose of this study was to examine health app use among mobile phone owners in the United States.\n\n\nMETHODS\nWe conducted a cross-sectional survey of 1604 mobile phone users throughout the United States. The 36-item survey assessed sociodemographic characteristics, history of and reasons for health app use/nonuse, perceived effectiveness of health apps, reasons for stopping use, and general health status.\n\n\nRESULTS\nA little over half (934/1604, 58.23%) of mobile phone users had downloaded a health-related mobile app. Fitness and nutrition were the most common categories of health apps used, with most respondents using them at least daily. Common reasons for not having downloaded apps were lack of interest, cost, and concern about apps collecting their data. Individuals more likely to use health apps tended to be younger, have higher incomes, be more educated, be Latino/Hispanic, and have a body mass index (BMI) in the obese range (all P<.05). Cost was a significant concern among respondents, with a large proportion indicating that they would not pay anything for a health app. Interestingly, among those who had downloaded health apps, trust in their accuracy and data safety was quite high, and most felt that the apps had improved their health. About half of the respondents (427/934, 45.7%) had stopped using some health apps, primarily due to high data entry burden, loss of interest, and hidden costs.\n\n\nCONCLUSIONS\nThese findings suggest that while many individuals use health apps, a substantial proportion of the population does not, and that even among those who use health apps, many stop using them. These data suggest that app developers need to better address consumer concerns, such as cost and high data entry burden, and that clinical trials are necessary to test the efficacy of health apps to broaden their appeal and adoption.",
"title": ""
},
{
"docid": "pos:1840173_1",
"text": "The global obesity epidemic has prompted our community to explore the potential for technology to play a stronger role in promoting healthier lifestyles. Although there are several examples of successful games based on focused physical interaction, persuasive applications that integrate into everyday life have had more mixed results. This underscores a need for designs that encourage physical activity while addressing fun, sustainability, and behavioral change. This note suggests a new perspective, inspired in part by the social nature of many everyday fitness applications and by the successful encouragement of long term play in massively multiplayer online games. We first examine the game design literature to distill a set of principles for discussing and comparing applications. We then use these principles to analyze an existing application. Finally, we present Kukini, a design for an everyday fitness game.",
"title": ""
},
{
"docid": "pos:1840173_2",
"text": "Gamification is the \"use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A frequently used model for gamification is to equate an activity in the non-game context with points and have external rewards for reaching specified point thresholds. One significant problem with this model of gamification is that it can reduce the internal motivation that the user has for the activity, as it replaces internal motivation with external motivation. If, however, the game design elements can be made meaningful to the user through information, then internal motivation can be improved as there is less need to emphasize external rewards. This paper introduces the concept of meaningful gamification through a user-centered exploration of theories behind organismic integration theory, situational relevance, situated motivational affordance, universal design for learning, and player-generated content. A Brief Introduction to Gamification One definition of gamification is \"the use of game design elements in non-game contexts\" (Deterding et al, 2011, p.1). A common implementation of gamification is to take the scoring elements of video games, such as points, levels, and achievements, and apply them to a work or educational context. While the term is relatively new, the concept has been around for some time through loyalty systems like frequent flyer miles, green stamps, and library summer reading programs. These gamification programs can increase the use of a service and change behavior, as users work toward meeting these goals to reach external rewards (Zichermann & Cunningham, 2011, p. 27). Gamification has met with significant criticism by those who study games. One problem is with the name. By putting the term \"game\" first, it implies that the entire activity will become an engaging experience, when, in reality, gamification typically uses only the least interesting part of a game: the scoring system. The term \"pointsification\" has been suggested as a label for gamification systems that add nothing more than a scoring system to a non-game activity (Robertson, 2010). One definition of games is \"a form of play with goals and structure\" (Maroney, 2001); the points-based gamification focuses on the goals and leaves the play behind. Ian Bogost suggests the term be changed to \"exploitationware,\" as that is a better description of what is really going on (2011). The underlying message of these criticisms of gamification is that there are more effective ways than a scoring system to engage users. Another concern is that organizations getting involved with gamification are not aware of the potential long-term negative impact of gamification. Underlying the concept of gamification is motivation. People can be driven to do something because of internal or external motivation. A meta-analysis by Deci, Koestner, and Ryan of 128 studies that examined motivation in educational settings found that almost all forms of rewards (except for non-controlling verbal rewards) reduced internal motivation (2001). The implication of this is that once gamification is used to provide external motivation, the user's internal motivation decreases. If the organization starts using gamification based upon external rewards and then decides to stop the rewards program, that organization will be worse off than when it started as users will be less likely to return to the behavior without the external reward (Deci, Koestner & Ryan, 2001). In the book Gamification by Design, the authors claim that this belief in internal motivation over extrinsic rewards is unfounded, and gamification can be used for organizations to control the behavior of users by replacing those internal motivations with extrinsic rewards. They do admit, though, that \"once you start giving someone a reward, you have to keep her in that reward loop forever\" (Zichermann & Cunningham, 2011, p. 27). Preprint of: Nicholson, S. (2012, June). A User-Centered Theoretical Framework for Meaningful Gamification. Paper Presented at Games+Learning+Society 8.0, Madison, WI. Further exploration of the meta-analysis of motivational literature in education found that if the task was already uninteresting, reward systems did not reduce internal motivation, as there was little internal motivation to start with. The authors concluded that \"the issue is how to facilitate people's understanding the importance of the activity to themselves and thus internalizing its regulation so they will be self-motivated to perform it\" (2001, p. 15). The goal of this paper is to explore theories useful in user-centered gamification that is meaningful to the user and therefore does not depend upon external rewards. Organismic Integration Theory Organismic Integration Theory (OIT) is a sub-theory of self-determination theory out of the field of Education created by Deci and Ryan (2004). Self-determination theory is focused on what drives an individual to make choices without external influence. OIT explores how different types of external motivations can be integrated with the underlying activity into someone’s own sense of self. Rather than state that motivations are either internalized or not, this theory presents a continuum based upon how much external control is integrated along with the desire to perform the activity. If there is heavy external control provided with a reward, then aspects of that external control will be internalized as well, while if there is less external control that goes along with the adaptation of an activity, then the activity will be more self-regulated. External rewards unrelated to the activity are the least likely to be integrated, as the perception is that someone else is controlling the individual’s behavior. Rewards based upon gaining or losing status that tap into the ego create an introjected regulation of behavior, and while this can be intrinsically accepted, the controlling aspect of these rewards causes the loss of internal motivation. Allowing users to self-identify with goals or groups that are meaningful is much more likely to produce autonomous, internalized behaviors, as the user is able to connect these goals to other values he or she already holds. A user who has fully integrated the activity along with his or her personal goals and needs is more likely to see the activity as positive than if there is external control integrated with the activity (Deci & Ryan, 2004). OIT speaks to the importance of creating a gamification system that is meaningful to the user, assuming that the goal of the system is to create long-term systemic change where the users feel positive about engaging in the non-game activity. On the other side, if too many external controls are integrated with the activity, the user can have negative feelings about engaging in the activity. To avoid negative feelings, the game-based elements of the activity need to be meaningful and rewarding without the need for external rewards. In order for these activities to be meaningful to a specific user, however, they have to be relevant to that user. Situational Relevance and Situated Motivational Affordance One of the key research areas in Library and Information Science has been about the concept of relevance as related to information retrieval. A user has an information need, and a relevant document is one that resolves some of that information need. The concept of relevance is important in determining the effectiveness of search tools and algorithms. Many research projects that have compared search tools looked at the same query posed to different systems, and then used judges to determine what was a \"relevant\" response to that query. This approach has been heavily critiqued, as there are many variables that affect if a user finds something relevant at that moment in his or her searching process. Schamber reviewed decades of research to find generalizable criteria that could be used to determine what is truly relevant to a query and came to the conclusion that the only way to know if something is relevant is to ask the user (1994). Two users with the same search query will have different information backgrounds, so that a document that is relevant for one user may not be relevant to another user. This concept of \"situational relevance\" is important when thinking about gamification. When someone else creates goals for a user, it is akin to an external judge deciding what is relevant to a query. Without involving the user, there is no way to know what goals are relevant to a user's background, interest, or needs. In a points-based gamification system, the goal of scoring points is less likely to be relevant to a user if the activity that the points measure is not relevant to that user. For example, in a hybrid automobile, the gamification systems revolve around conservation and the point system can reflect how much energy is being saved. If the concept of saving energy is relevant to a user, then a point system based upon that concept will also be relevant to that user. If the user is not internally concerned with saving energy, then a gamification system based upon saving energy will not be relevant to that user. There may be other elements of the driving experience that are of interest to a user, so if each user can select what aspect of the driving experience is measured, more users will find the system to be relevant. By involving the user in the creation or customization of the gamification system, the user can select or create meaningful game elements and goals that fall in line with their own interests. A related theory out of Human-Computer Interaction that has been applied to gamification is “situated motivational affordance” (Deterding, 2011b). This model was designed to help gamification designers consider the context of each o",
"title": ""
}
] | [
{
"docid": "neg:1840173_0",
"text": "Identifying students’ learning styles has several benefits such as making students aware of their strengths and weaknesses when it comes to learning and the possibility to personalize their learning environment to their learning styles. While there exist learning style questionnaires for identifying a student’s learning style, such questionnaires have several disadvantages and therefore, research has been conducted on automatically identifying learning styles from students’ behavior in a learning environment. Current approaches to automatically identify learning styles have an average precision between 66% and 77%, which shows the need for improvements in order to use such automatic approaches reliably in learning environments. In this paper, four computational intelligence algorithms (artificial neural network, genetic algorithm, ant colony system and particle swarm optimization) have been investigated with respect to their potential to improve the precision of automatic learning style identification. Each algorithm was evaluated with data from 75 students. The artificial neural network shows the most promising results with an average precision of 80.7%, followed by particle swarm optimization with an average precision of 79.1%. Improving the precision of automatic learning style identification allows more students to benefit from more accurate information about their learning styles as well as more accurate personalization towards accommodating their learning styles in a learning environment. Furthermore, teachers can have a better understanding of their students and be able to provide more appropriate interventions.",
"title": ""
},
{
"docid": "neg:1840173_1",
"text": "This paper describes a general methodology for extracting attribute-value pairs from web pages. It consists of two phases: candidate generation, in which syntactically likely attribute-value pairs are annotated; and candidate filtering, in which semantically improbable annotations are removed. We describe three types of candidate generators and two types of candidate filters, all of which are designed to be massively parallelizable. Our methods can handle 1 billion web pages in less than 6 hours with 1,000 machines. The best generator and filter combination achieves 70% F-measure compared to a hand-annotated corpus.",
"title": ""
},
{
"docid": "neg:1840173_2",
"text": "Massive MIMO (Multiple-Input–Multiple-Output) is a wireless technology which aims to serve several different devices simultaneously in the same frequency band through spatial multiplexing, made possible by using a large number of antennas at the base station. The many antennas facilitate efficient beamforming, based on channel estimates acquired from uplink reference signals, which allows the base station to transmit signals exactly where they are needed. The multiplexing together with the array gain from the beamforming can increase the spectral efficiency over contemporary systems. One challenge of practical importance is how to transmit data in the downlink when no channel state information is available. When a device initially joins the network, prior to transmitting uplink reference signals that enable beamforming, it needs system information—instructions on how to properly function within the network. It is transmission of system information that is the main focus of this thesis. In particular, the thesis analyzes how the reliability of the transmission of system information depends on the available amount of diversity. It is shown how downlink reference signals, space-time block codes, and power allocation can be used to improve the reliability of this transmission. In order to estimate the uplink and downlink channels from uplink reference signals, which is imperative to ensure scalability in the number of base station antennas, massive MIMO relies on channel reciprocity. This thesis shows that the principles of channel reciprocity can also be exploited by a jammer, a malicious transmitter, aiming to disrupt legitimate communication between two devices. A heuristic scheme is proposed in which the jammer estimates the channel to a target device blindly, without any knowledge of the transmitted legitimate signals, and subsequently beamforms noise towards the target. Under the same power constraint, the proposed jammer can disrupt the legitimate link more effectively than a conventional omnidirectional jammer in many cases.",
"title": ""
},
{
"docid": "neg:1840173_3",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "neg:1840173_4",
"text": "We conducted a literature review on systems that track learning analytics data (e.g., resource use, time spent, assessment data, etc.) and provide a report back to students in the form of visualizations, feedback, or recommendations. This review included a rigorous article search process; 945 articles were identified in the initial search. After filtering out articles that did not meet the inclusion criteria, 94 articles were included in the final analysis. Articles were coded on five categories chosen based on previous work done in this area: functionality, data sources, design analysis, perceived effects, and actual effects. The purpose of this review is to identify trends in the current student-facing learning analytics reporting system literature and provide recommendations for learning analytics researchers and practitioners for future work.",
"title": ""
},
{
"docid": "neg:1840173_5",
"text": "The design and construction of truly humanoid robots that can perceive and interact with the environment depends significantly on their perception capabilities. In this paper we present the Karlsruhe Humanoid Head, which has been designed to be used both as part of our humanoid robots ARMAR-IIIa and ARMAR-IIIb and as a stand-alone robot head for studying various visual perception tasks in the context of object recognition and human-robot interaction. The head has seven degrees of freedom (DoF). The eyes have a common tilt and can pan independently. Each eye is equipped with two digital color cameras, one with a wide-angle lens for peripheral vision and one with a narrow-angle lens for foveal vision to allow simple visuo-motor behaviors. Among these are tracking and saccadic motions towards salient regions, as well as more complex visual tasks such as hand-eye coordination. We present the mechatronic design concept, the motor control system, the sensor system and the computational system. To demonstrate the capabilities of the head, we present accuracy test results, and the implementation of both open-loop and closed-loop control on the head.",
"title": ""
},
{
"docid": "neg:1840173_6",
"text": "OBJECTIVE\nWe have previously reported an automated method for within-modality (e.g., PET-to-PET) image alignment. We now describe modifications to this method that allow for cross-modality registration of MRI and PET brain images obtained from a single subject.\n\n\nMETHODS\nThis method does not require fiducial markers and the user is not required to identify common structures on the two image sets. To align the images, the algorithm seeks to minimize the standard deviation of the PET pixel values that correspond to each MRI pixel value. The MR images must be edited to exclude nonbrain regions prior to using the algorithm.\n\n\nRESULTS AND CONCLUSION\nThe method has been validated quantitatively using data from patients with stereotaxic fiducial markers rigidly fixed in the skull. Maximal three-dimensional errors of < 3 mm and mean three-dimensional errors of < 2 mm were measured. Computation time on a SPARCstation IPX varies from 3 to 9 min to align MR image sets with [18F]fluorodeoxyglucose PET images. The MR alignment with noisy H2(15)O PET images typically requires 20-30 min.",
"title": ""
},
{
"docid": "neg:1840173_7",
"text": "Additive manufacturing (AM) alias 3D printing translates computer-aided design (CAD) virtual 3D models into physical objects. By digital slicing of CAD, 3D scan, or tomography data, AM builds objects layer by layer without the need for molds or machining. AM enables decentralized fabrication of customized objects on demand by exploiting digital information storage and retrieval via the Internet. The ongoing transition from rapid prototyping to rapid manufacturing prompts new challenges for mechanical engineers and materials scientists alike. Because polymers are by far the most utilized class of materials for AM, this Review focuses on polymer processing and the development of polymers and advanced polymer systems specifically for AM. AM techniques covered include vat photopolymerization (stereolithography), powder bed fusion (SLS), material and binder jetting (inkjet and aerosol 3D printing), sheet lamination (LOM), extrusion (FDM, 3D dispensing, 3D fiber deposition, and 3D plotting), and 3D bioprinting. The range of polymers used in AM encompasses thermoplastics, thermosets, elastomers, hydrogels, functional polymers, polymer blends, composites, and biological systems. Aspects of polymer design, additives, and processing parameters as they relate to enhancing build speed and improving accuracy, functionality, surface finish, stability, mechanical properties, and porosity are addressed. Selected applications demonstrate how polymer-based AM is being exploited in lightweight engineering, architecture, food processing, optics, energy technology, dentistry, drug delivery, and personalized medicine. Unparalleled by metals and ceramics, polymer-based AM plays a key role in the emerging AM of advanced multifunctional and multimaterial systems including living biological systems as well as life-like synthetic systems.",
"title": ""
},
{
"docid": "neg:1840173_8",
"text": "Network analysis has an increasing role in our effort to understand the complexity of biological systems. This is because of our ability to generate large data sets, where the interaction or distance between biological components can be either measured experimentally or calculated. Here we describe the use of BioLayout Express3D, an application that has been specifically designed for the integration, visualization and analysis of large network graphs derived from biological data. We describe the basic functionality of the program and its ability to display and cluster large graphs in two- and three-dimensional space, thereby rendering graphs in a highly interactive format. Although the program supports the import and display of various data formats, we provide a detailed protocol for one of its unique capabilities, the network analysis of gene expression data and a more general guide to the manipulation of graphs generated from various other data types.",
"title": ""
},
{
"docid": "neg:1840173_9",
"text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Compared to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n^2 unknowns to n), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability; in contrast, the current method produces diffeomorphic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.",
"title": ""
},
{
"docid": "neg:1840173_10",
"text": "A high level of sustained personal plaque control is fundamental for successful treatment outcomes in patients with active periodontal disease and, hence, oral hygiene instructions are the cornerstone of periodontal treatment planning. Other risk factors for periodontal disease also should be identified and modified where possible. Many restorative dental treatments in particular require the establishment of healthy periodontal tissues for their clinical success. Failure by patients to control dental plaque because of inappropriate designs and materials for restorations and prostheses will result in the long-term failure of the restorations and the loss of supporting tissues. Periodontal treatment planning considerations are also very relevant to endodontic, orthodontic and osseointegrated dental implant conditions and proposed therapies.",
"title": ""
},
{
"docid": "neg:1840173_11",
"text": "Knowing the user’s point of gaze has significant potential to enhance current human-computer interfaces, given that eye movements can be used as an indicator of the attentional state of a user. The primary obstacle of integrating eye movements into today’s interfaces is the availability of a reliable, low-cost open-source eye-tracking system. Towards making such a system available to interface designers, we have developed a hybrid eye-tracking algorithm that integrates feature-based and model-based approaches and made it available in an open-source package. We refer to this algorithm as \"starburst\" because of the novel way in which pupil features are detected. This starburst algorithm is more accurate than pure feature-based approaches yet is significantly less time consuming than pure model-based approaches. The current implementation is tailored to tracking eye movements in infrared video obtained from an inexpensive head-mounted eye-tracking system. A validation study was conducted and showed that the technique can reliably estimate eye position with an accuracy of approximately one degree of visual angle.",
"title": ""
},
{
"docid": "neg:1840173_12",
"text": "Markov decision processes (MDPs) are powerful tools for decision making in uncertain dynamic environments. However, the solutions of MDPs are of limited practical use due to their sensitivity to distributional model parameters, which are typically unknown and have to be estimated by the decision maker. To counter the detrimental effects of estimation errors, we consider robust MDPs that offer probabilistic guarantees in view of the unknown parameters. To this end, we assume that an observation history of the MDP is available. Based on this history, we derive a confidence region that contains the unknown parameters with a pre-specified probability 1 − β. Afterwards, we determine a policy that attains the highest worst-case performance over this confidence region. By construction, this policy achieves or exceeds its worst-case performance with a confidence of at least 1 − β. Our method involves the solution of tractable conic programs of moderate size. Notation: For a finite set X = {1, . . . , X}, M(X) denotes the probability simplex in R^X. An X-valued random variable χ has distribution m ∈ M(X), denoted by χ ∼ m, if P(χ = x) = m_x for all x ∈ X. By default, all vectors are column vectors. We denote by e_k the kth canonical basis vector, while e denotes the vector whose components are all ones. In both cases, the dimension will usually be clear from the context. For square matrices A and B, the relation A ⪰ B indicates that the matrix A − B is positive semidefinite. We denote the space of symmetric n × n matrices by S^n. The declaration f : X ↦c Y (f : X ↦a Y) implies that f is a continuous (affine) function from X to Y. For a matrix A, we denote its ith row by A_i· (a row vector) and its jth column by A_·j.",
"title": ""
},
{
"docid": "neg:1840173_13",
"text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.",
"title": ""
},
{
"docid": "neg:1840173_14",
"text": "We present a novel approach for vision-based road direction detection for autonomous Unmanned Ground Vehicles (UGVs). The proposed method utilizes only monocular vision information similar to human perception to detect road directions with respect to the vehicle. The algorithm searches for a global feature of the roads due to perspective projection (so-called vanishing point) to distinguish road directions. The proposed approach consists of two stages. The first stage estimates the vanishing-point locations from single frames. The second stage uses a Rao-Blackwellised particle filter to track initial vanishing-point estimations over a sequence of images in order to provide more robust estimation. Simultaneously, the direction of the road ahead of the vehicle is predicted, which is prerequisite information for vehicle steering and path planning. The proposed approach assumes minimum prior knowledge about the environment and can cope with complex situations such as ground cover variations, different illuminations, and cast shadows. Its performance is evaluated on video sequences taken during test run of the DARPA Grand Challenge.",
"title": ""
},
{
"docid": "neg:1840173_15",
"text": "The ultimate goal of this indoor mapping research is to automatically reconstruct a floorplan simply by walking through a house with a smartphone in a pocket. This paper tackles this problem by proposing FloorNet, a novel deep neural architecture. The challenge lies in the processing of RGBD streams spanning a large 3D space. FloorNet effectively processes the data through three neural network branches: 1) PointNet with 3D points, exploiting the 3D information; 2) CNN with a 2D point density image in a top-down view, enhancing the local spatial reasoning; and 3) CNN with RGB images, utilizing the full image information. FloorNet exchanges intermediate features across the branches to exploit the best of all the architectures. We have created a benchmark for floorplan reconstruction by acquiring RGBD video streams for 155 residential houses or apartments with Google Tango phones and annotating complete floorplan information. Our qualitative and quantitative evaluations demonstrate that the fusion of three branches effectively improves the reconstruction quality. We hope that the paper together with the benchmark will be an important step towards solving a challenging vector-graphics reconstruction problem. Code and data are available at https://github.com/art-programmer/FloorNet.",
"title": ""
},
{
"docid": "neg:1840173_16",
"text": "Despite the widespread popularity of online opinion forums among consumers, the business value that such systems bring to organizations has, so far, remained an unanswered question. This paper addresses this question by studying the value of online movie ratings in forecasting motion picture revenues. First, we conduct a survey where a nationally representative sample of subjects who do not rate movies online is asked to rate a number of recent movies. Their ratings exhibit high correlation with online ratings for the same movies. We thus provide evidence for the claim that online ratings can be considered as a useful proxy for word-of-mouth about movies. Inspired by the Bass model of product diffusion, we then develop a motion picture revenue-forecasting model that incorporates the impact of both publicity and word of mouth on a movie's revenue trajectory. Using our model, we derive notably accurate predictions of a movie's total revenues from statistics of user reviews posted on Yahoo! Movies during the first week of a new movie's release. The results of our work provide encouraging evidence for the value of publicly available online forum information to firms for real-time forecasting and competitive analysis. This is a preliminary draft of a work in progress. It is being distributed to seminar participants for comments and discussion.",
"title": ""
},
{
"docid": "neg:1840173_17",
"text": "This research comprehensively illustrates the design, implementation and evaluation of a novel marker less environment tracking technology for an augmented reality based indoor navigation application, adapted to efficiently operate on a proprietary head-mounted display. Although the display device used, Google Glass, had certain pitfalls such as short battery life, slow processing speed, and lower quality visual display but the tracking technology was able to complement these limitations by rendering a very efficient, precise, and intuitive navigation experience. The performance assessments, conducted on the basis of efficiency and accuracy, substantiated the utility of the device for everyday navigation scenarios, whereas a later conducted subjective evaluation of handheld and wearable devices also corroborated the wearable as the preferred device for indoor navigation.",
"title": ""
},
{
"docid": "neg:1840173_18",
"text": "Database applications such as online transaction processing (OLTP) and decision support systems (DSS) constitute the largest and fastest-growing segment of the market for multiprocessor servers. However, most current system designs have been optimized to perform well on scientific and engineering workloads. Given the radically different behavior of database workloads (especially OLTP), it is important to re-evaluate key system design decisions in the context of this important class of applications.This paper examines the behavior of database workloads on shared-memory multiprocessors with aggressive out-of-order processors, and considers simple optimizations that can provide further performance improvements. Our study is based on detailed simulations of the Oracle commercial database engine. The results show that the combination of out-of-order execution and multiple instruction issue is indeed effective in improving performance of database workloads, providing gains of 1.5 and 2.6 times over an in-order single-issue processor for OLTP and DSS, respectively. In addition, speculative techniques enable optimized implementations of memory consistency models that significantly improve the performance of stricter consistency models, bringing the performance to within 10--15% of the performance of more relaxed models.The second part of our study focuses on the more challenging OLTP workload. We show that an instruction stream buffer is effective in reducing the remaining instruction stalls in OLTP, providing a 17% reduction in execution time (approaching a perfect instruction cache to within 15%). Furthermore, our characterization shows that a large fraction of the data communication misses in OLTP exhibit migratory behavior; our preliminary results show that software prefetch and writeback/flush hints can be used for this data to further reduce execution time by 12%.",
"title": ""
},
{
"docid": "neg:1840173_19",
"text": "Event cameras, such as dynamic vision sensors (DVS), and dynamic and activepixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events. The APS stream is a sequence of standard grayscale global-shutter image sensor frames. The DVS events represent brightness changes occurring at a particular moment, with a jitter of about a millisecond under most lighting conditions. They have a dynamic range of >120 dB and effective frame rates >1 kHz at data rates comparable to 30 fps (frames/second) image sensors. To overcome some of the limitations of current image acquisition technology, we investigate in this work the use of the combined DVS and APS streams in endto-end driving applications. The dataset DDD17 accompanying this paper is the first open dataset of annotated DAVIS driving recordings. DDD17 has over 12 h of a 346x260 pixel DAVIS sensor recording highway and city driving in daytime, evening, night, dry and wet weather conditions, along with vehicle speed, GPS position, driver steering, throttle, and brake captured from the car’s on-board diagnostics interface. As an example application, we performed a preliminary end-toend learning study of using a convolutional neural network that is trained to predict the instantaneous steering angle from DVS and APS visual data.",
"title": ""
}
] |
1840174 | Why and how Java developers break APIs | [
{
"docid": "pos:1840174_0",
"text": "When APIs evolve, clients make corresponding changes to their applications to utilize new or updated APIs. Despite the benefits of new or updated APIs, developers are often slow to adopt the new APIs. As a first step toward understanding the impact of API evolution on software ecosystems, we conduct an in-depth case study of the co-evolution behavior of Android API and dependent applications using the version history data found in github. Our study confirms that Android is evolving fast at a rate of 115 API updates per month on average. Client adoption, however, is not catching up with the pace of API evolution. About 28% of API references in client applications are outdated with a median lagging time of 16 months. 22% of outdated API usages eventually upgrade to use newer API versions, but the propagation time is about 14 months, much slower than the average API release interval (3 months). Fast evolving APIs are used more by clients than slow evolving APIs but the average time taken to adopt new versions is longer for fast evolving APIs. Further, API usage adaptation code is more defect prone than the one without API usage adaptation. This may indicate that developers avoid API instability.",
"title": ""
}
] | [
{
"docid": "neg:1840174_0",
"text": "Speech processing is emerged as one of the important application area of digital signal processing. Various fields for research in speech processing are speech recognition, speaker recognition, speech synthesis, speech coding etc. The objective of automatic speaker recognition is to extract, characterize and recognize the information about speaker identity. Feature extraction is the first step for speaker recognition. Many algorithms are suggested/developed by the researchers for feature extraction. In this work, the Mel Frequency Cepstrum Coefficient (MFCC) feature has been used for designing a text dependent speaker identification system. Some modifications to the existing technique of MFCC for feature extraction are also suggested to improve the speaker recognition efficiency.",
"title": ""
},
{
"docid": "neg:1840174_1",
"text": "Extraction of the lower third molar is one of the most common procedures performed in oral surgery. In general, impacted tooth extraction involves sectioning the tooth’s crown and roots. In order to divide the impacted tooth so that it can be extracted, high-speed air turbine drills are frequently used. However, complications related to air turbine drills may occur. In this report, we propose an alternative tooth sectioning method that obviates the need for air turbine drill use by using a low-speed straight handpiece and carbide bur. A 21-year-old female patient presented to the institute’s dental hospital complaining of symptoms localized to the left lower third molar tooth that were suggestive of impaction. After physical examination, tooth extraction of the impacted left lower third molar was proposed and the patient consented to the procedure. The crown was divided using a conventional straight low-speed handpiece and carbide bur. This carbide bur can easily cut through the enamel of the crown. On post-operative day five, the suture was removed and the wound was extremely clean. This technique could minimise intra-operative time and reduce the morbidity associated with air turbine drill-assisted lower third molar extraction.",
"title": ""
},
{
"docid": "neg:1840174_2",
"text": "Ethernet is considered as a future communication standard for distributed embedded systems in the automotive and industrial domains. A key challenge is the deterministic low-latency transport of Ethernet frames, as many safety-critical real-time applications in these domains have tight timing requirements. Time-sensitive networking (TSN) is an upcoming set of Ethernet standards, which (among other things) address these requirements by specifying new quality of service mechanisms in the form of different traffic shapers. In this paper, we consider TSN's time-aware and peristaltic shapers and evaluate whether these shapers are able to fulfill these strict timing requirements. We present a formal timing analysis, which is a key requirement for the adoption of Ethernet in safety-critical real-time systems, to derive worst-case latency bounds for each shaper. We use a realistic automotive Ethernet setup to compare these shapers to each other and against Ethernet following IEEE 802.1Q.",
"title": ""
},
{
"docid": "neg:1840174_3",
"text": "We present online learning techniques for statistical machine translation (SMT). The availability of large training data sets that grow constantly over time is becoming more and more frequent in the field of SMT—for example, in the context of translation agencies or the daily translation of government proceedings. When new knowledge is to be incorporated in the SMT models, the use of batch learning techniques requires very time-consuming estimation processes over the whole training set that may take days or weeks to be executed. By means of the application of online learning, new training samples can be processed individually in real time. For this purpose, we define a state-of-the-art SMT model composed of a set of submodels, as well as a set of incremental update rules for each of these submodels. To test our techniques, we have studied two well-known SMT applications that can be used in translation agencies: post-editing and interactive machine translation. In both scenarios, the SMT system collaborates with the user to generate high-quality translations. These user-validated translations can be used to extend the SMT models by means of online learning. Empirical results in the two scenarios under consideration show the great impact of frequent updates in the system performance. The time cost of such updates was also measured, comparing the efficiency of a batch learning SMT system with that of an online learning system, showing that online learning is able to work in real time whereas the time cost of batch retraining soon becomes infeasible. Empirical results also showed that the performance of online learning is comparable to that of batch learning. Moreover, the proposed techniques were able to learn from previously estimated models or from scratch. We also propose two new measures to predict the effectiveness of online learning in SMT tasks. The translation system with online learning capabilities presented here is implemented in the open-source Thot toolkit for SMT.",
"title": ""
},
{
"docid": "neg:1840174_4",
"text": "On shared-memory systems, Cilk-style work-stealing has been used to effectively parallelize irregular task-graph based applications such as Unbalanced Tree Search (UTS). There are two main difficulties in extending this approach to distributed memory. In the shared memory approach, thieves (nodes without work) constantly attempt to asynchronously steal work from randomly chosen victims until they find work. In distributed memory, thieves cannot autonomously steal work from a victim without disrupting its execution. When work is sparse, this results in performance degradation. In essence, a direct extension of traditional work-stealing to distributed memory violates the work-first principle underlying work-stealing. Further, thieves spend useless CPU cycles attacking victims that have no work, resulting in system inefficiencies in multi-programmed contexts. Second, it is non-trivial to detect active distributed termination (detect that programs at all nodes are looking for work, hence there is no work). This problem is well-studied and requires careful design for good performance. Unfortunately, in most existing languages/frameworks, application developers are forced to implement their own distributed termination detection.\n In this paper, we develop a simple set of ideas that allow work-stealing to be efficiently extended to distributed memory. First, we introduce lifeline graphs: low-degree, low-diameter, fully connected directed graphs. Such graphs can be constructed from k-dimensional hypercubes. When a node is unable to find work after w unsuccessful steals, it quiesces after informing the outgoing edges in its lifeline graph. Quiescent nodes do not disturb other nodes. A quiesced node is reactivated when work arrives from a lifeline and itself shares this work with those of its incoming lifelines that are activated. Termination occurs precisely when computation at all nodes has quiesced. In a language such as X10, such passive distributed termination can be detected automatically using the finish construct -- no application code is necessary.\n Our design is implemented in a few hundred lines of X10. On the binomial tree described in [olivier:08], the program achieves 87% efficiency on an Infiniband cluster of 1024 Power7 cores, with a peak throughput of 2.37 GNodes/sec. It achieves 87% efficiency on a Blue Gene/P with 2048 processors, and a peak throughput of 0.966 GNodes/s. All numbers are relative to single core sequential performance. This implementation has been refactored into a reusable global load balancing framework. Applications can use this framework to obtain global load balance with minimal code changes.\n In summary, we claim: (a) the first formulation of UTS that does not involve application level global termination detection, (b) the introduction of lifeline graphs to reduce failed steals, (c) the demonstration of simple lifeline graphs based on k-hypercubes, (d) performance with superior efficiency (or the same efficiency but over a wider range) than published results on UTS. In particular, our framework can deliver the same or better performance as an unrestricted random work-stealing implementation, while reducing the number of attempted steals.",
"title": ""
},
{
"docid": "neg:1840174_5",
"text": "Internal organs are hidden and untouchable, making it difficult for children to learn their size, position, and function. Traditionally, human anatomy (body form) and physiology (body function) are taught using techniques ranging from worksheets to three-dimensional models. We present a new approach called BodyVis, an e-textile shirt that combines biometric sensing and wearable visualizations to reveal otherwise invisible body parts and functions. We describe our 15-month iterative design process including lessons learned through the development of three prototypes using participatory design and two evaluations of the final prototype: a design probe interview with seven elementary school teachers and three single-session deployments in after-school programs. Our findings have implications for the growing area of wearables and tangibles for learning.",
"title": ""
},
{
"docid": "neg:1840174_6",
"text": "Advances in mobile robotics have enabled robots that can autonomously operate in human-populated environments. Although primary tasks for such robots might be fetching, delivery, or escorting, they present an untapped potential as information gathering agents that can answer questions for the community of co-inhabitants. In this paper, we seek to better understand requirements for such information gathering robots (InfoBots) from the perspective of the user requesting the information. We present findings from two studies: (i) a user survey conducted in two office buildings and (ii) a 4-day long deployment in one of the buildings, during which inhabitants of the building could ask questions to an InfoBot through a web-based interface. These studies allow us to characterize the types of information that InfoBots can provide for their users.",
"title": ""
},
{
"docid": "neg:1840174_7",
"text": "The feedback dynamics from mosquito to human and back to mosquito involve considerable time delays due to the incubation periods of the parasites. In this paper, taking explicit account of the incubation periods of parasites within the human and the mosquito, we first propose a delayed Ross-Macdonald model. Then we calculate the basic reproduction number R0 and carry out some sensitivity analysis of R0 on the incubation periods, that is, to study the effect of time delays on the basic reproduction number. It is shown that the basic reproduction number is a decreasing function of both time delays. Thus, prolonging the incubation periods in either humans or mosquitos (via medicine or control measures) could reduce the prevalence of infection.",
"title": ""
},
{
"docid": "neg:1840174_8",
"text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1",
"title": ""
},
{
"docid": "neg:1840174_9",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
},
{
"docid": "neg:1840174_10",
"text": "We consider the problem of optimal control in continuous and partially observable environments when the parameters of the model are not known exactly. Partially observable Markov decision processes (POMDPs) provide a rich mathematical model to handle such environments but require a known model to be solved by most approaches. This is a limitation in practice as the exact model parameters are often difficult to specify exactly. We adopt a Bayesian approach where a posterior distribution over the model parameters is maintained and updated through experience with the environment. We propose a particle filter algorithm to maintain the posterior distribution and an online planning algorithm, based on trajectory sampling, to plan the best action to perform under the current posterior. The resulting approach selects control actions which optimally trade-off between 1) exploring the environment to learn the model, 2) identifying the system's state, and 3) exploiting its knowledge in order to maximize long-term rewards. Our preliminary results on a simulated robot navigation problem show that our approach is able to learn good models of the sensors and actuators, and performs as well as if it had the true model.",
"title": ""
},
{
"docid": "neg:1840174_11",
"text": "We propose methods to classify lines of military chat, or posts, which contain items of interest. We evaluated several current text categorization and feature selection methodologies on chat posts. Our chat posts are examples of 'micro-text', or text that is generally very short in length, semi-structured, and characterized by unstructured or informal grammar and language. Although this study focused specifically on tactical updates via chat, we believe the findings are applicable to content of a similar linguistic structure. Completion of this milestone is a significant first step in allowing for more complex categorization and information extraction.",
"title": ""
},
{
"docid": "neg:1840174_12",
"text": "Segmentation and grouping of image elements is required to proceed with image recognition. Because images are two-dimensional (2D) representations of real three-dimensional (3D) scenes, information about the third dimension, such as the geometrical relations between objects that are important for reasonable segmentation and grouping, is lost in 2D image representations. Computer stereo vision relies on understanding the information stored in the 3D scene. Techniques for stereo computation are surveyed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and the extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed as important for image understanding.",
"title": ""
},
{
"docid": "neg:1840174_13",
"text": "This paper presents an in-depth study of young Swedish consumers and their impulsive online buying behaviour for clothing. The aim of the study is to develop the understanding of what factors affect impulse buying of clothing online and what feelings emerge when buying online. The study carried out was exploratory in nature, aiming to develop an understanding of impulse buying behaviour online before, during and after the actual purchase. The empirical data was collected through personal interviews. In the study, a pattern of the consumers' recurrent feelings is identified through the impulse buying process: escapism, pleasure, reward, scarcity, security and anticipation. Escapism occurs particularly often, since the study revealed that the consumers often carried out impulse purchases when they initially were bored, in contrast to previous studies. 1 University of Borås, Swedish Institute for Innovative Retailing, School of Business and IT, Allégatan 1, S-501 90 Borås, Sweden. Phone: +46732305934 Mail: malin.sundstrom@hb.se",
"title": ""
},
{
"docid": "neg:1840174_14",
"text": "Customer defection or churn is a widespread phenomenon that threatens firms across a variety of industries with dramatic financial consequences. To tackle this problem, companies are developing sophisticated churn management strategies. These strategies typically involve two steps – ranking customers based on their estimated propensity to churn, and then offering retention incentives to a subset of customers at the top of the churn ranking. The implicit assumption is that this process would maximize the firm's profits by targeting customers who are most likely to churn. However, current marketing research and practice aim at maximizing the correct classification of churners and non-churners. Profit from targeting a customer depends not only on the customer's propensity to churn, but also on her spend or value, her probability of responding to retention offers, and the cost of these offers. Overall profit of the firm also depends on the number of customers the firm decides to target for its retention campaign. We propose a predictive model that accounts for all these elements. Our optimization algorithm uses stochastic gradient boosting, a state-of-the-art numerical algorithm based on stage-wise gradient descent. It also determines the optimal number of customers to target. The resulting optimal customer ranking and target size selection leads to, on average, a 115% improvement in profit compared to current methods. Remarkably, the improvement in profit comes along with more prediction errors in terms of which customers will churn. However, the new loss function leads to better predictions where it matters the most for the company's profits. For a company like Verizon Wireless, this translates into a profit increase of at least $28 million from a single retention campaign, without any additional implementation cost.",
"title": ""
},
{
"docid": "neg:1840174_15",
"text": "We present a technique for automatic placement of authorization hooks, and apply it to the Linux security modules (LSM) framework. LSM is a generic framework which allows diverse authorization policies to be enforced by the Linux kernel. It consists of a kernel module which encapsulates an authorization policy, and hooks into the kernel module placed at appropriate locations in the Linux kernel. The kernel enforces the authorization policy using hook calls. In current practice, hooks are placed manually in the kernel. This approach is tedious, and as prior work has shown, is prone to security holes.Our technique uses static analysis of the Linux kernel and the kernel module to automate hook placement. Given a non-hook-placed version of the Linux kernel, and a kernel module that implements an authorization policy, our technique infers the set of operations authorized by each hook, and the set of operations performed by each function in the kernel. It uses this information to infer the set of hooks that must guard each kernel function. We describe the design and implementation of a prototype tool called TAHOE (Tool for Authorization Hook Placement) that uses this technique. We demonstrate the effectiveness of TAHOE by using it with the LSM implementation of security-enhanced Linux (selinux). While our exposition in this paper focuses on hook placement for LSM, our technique can be used to place hooks in other LSM-like architectures as well.",
"title": ""
},
{
"docid": "neg:1840174_16",
"text": "Traditional approaches towards human identification, such as fingerprints, identity cards and iris recognition, have led to improved techniques for face recognition. This includes enhancement and segmentation of the face image, detection of the face boundary and facial features, matching of extracted features against the features in a database, and finally recognition of the face. This research proposes a wavelet transformation for preprocessing the face image, extracting the edge image, extracting features and finally matching the extracted facial features for face recognition. Simulation is done using the ORL database, which contains PGM images. This research finds application in homeland security, where it can increase the robustness of existing face recognition algorithms.",
"title": ""
},
{
"docid": "neg:1840174_17",
"text": "Digital advertisements are delivered in the form of static images, animations or videos, with the goal to promote a product, a service or an idea to desktop or mobile users. Thus, the advertiser pays a monetary cost to buy ad-space in a content provider’s medium (e.g., website) to place their advertisement in the consumer’s display. However, is it only the advertiser who pays for the ad delivery? Unlike traditional advertisements in mediums such as newspapers, TV or radio, in the digital world, the end-users are also paying a cost for the advertisement delivery. Whilst the cost on the advertiser’s side is clearly monetary, on the end-user, it includes both quantifiable costs, such as network requests and transferred bytes, and qualitative costs such as privacy loss to the ad ecosystem. In this study, we aim to increase user awareness regarding the hidden costs of digital advertisement in mobile devices, and compare the user and advertiser views. Specifically, we built OpenDAMP, a transparency tool that passively analyzes users’ web traffic and estimates the costs in both sides. We use a year-long dataset of 1270 real mobile users and by juxtaposing the costs of both sides, we identify a clear imbalance: the advertisers pay several times less to deliver ads, than the cost paid by the users to download them. In addition, the majority of users experience a significant privacy loss, through the personalized ad delivery mechanics.",
"title": ""
},
{
"docid": "neg:1840174_18",
"text": "Clock gating is an effective technique for minimizing dynamic power in sequential circuits. Applying clock gating at gate level not only saves time compared to implementing clock gating in the RTL code, but also saves power and can easily be automated in the synthesis process. This paper presents simulation results on various types of clock gating at different hierarchical levels on a serial peripheral interface (SPI) design. In general, power savings of about 30% and a 36% reduction in toggle rate can be seen with different complex clock-gating methods with respect to no clock gating in the design.",
"title": ""
},
{
"docid": "neg:1840174_19",
"text": "We present a cross-layer modeling and design approach for multigigabit indoor wireless personal area networks (WPANs) utilizing the unlicensed millimeter (mm) wave spectrum in the 60 GHz band. Our approach accounts for the following two characteristics that sharply distinguish mm wave networking from that at lower carrier frequencies. First, mm wave links are inherently directional: directivity is required to overcome the higher path loss at smaller wavelengths, and it is feasible with compact, low-cost circuit board antenna arrays. Second, indoor mm wave links are highly susceptible to blockage because of the limited ability to diffract around obstacles such as the human body and furniture. We develop a diffraction-based model to determine network link connectivity as a function of the locations of stationary and moving obstacles. For a centralized WPAN controlled by an access point, it is shown that multihop communication, with the introduction of a small number of relay nodes, is effective in maintaining network connectivity in scenarios where single-hop communication would suffer unacceptable outages. The proposed multihop MAC protocol accounts for the fact that every link in the WPAN is highly directional, and is shown, using packet level simulations, to maintain high network utilization with low overhead.",
"title": ""
}
] |
1840175 | Business Process Modeling: Current Issues and Future Challenges | [
{
"docid": "pos:1840175_0",
"text": "This paper examines cognitive beliefs and affect influencing one's intention to continue using (continuance) information systems (IS). Expectation-confirmation theory is adapted from the consumer behavior literature and integrated with theoretical and empirical findings from prior IS usage research to theorize a model of IS continuance. Five research hypotheses derived from this model are empirically validated using a field survey of online banking users. The results suggest that users' continuance intention is determined by their satisfaction with IS use and perceived usefulness of continued IS use. User satisfaction, in turn, is influenced by their confirmation of expectation from prior IS use and perceived usefulness. Postacceptance perceived usefulness is influenced by users' confirmation level. (Ron Weber was the accepting senior editor for this paper.) This study draws attention to the substantive differences between acceptance and continuance behaviors, theorizes and validates one of the earliest theoretical models of IS continuance, integrates confirmation and user satisfaction constructs within our current understanding of IS use, conceptualizes and creates an initial scale for measuring IS continuance, and offers an initial explanation for the acceptance-discontinuance anomaly.",
"title": ""
},
{
"docid": "pos:1840175_1",
"text": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature.",
"title": ""
}
] | [
{
"docid": "neg:1840175_0",
"text": "Artificial bee colony (ABC) is one of the newest nature-inspired heuristics for optimization problems. Inspired by the chaos in real bee colony behavior, this paper proposes new ABC algorithms that use chaotic maps for parameter adaptation in order to improve the convergence characteristics and to prevent the ABC from getting stuck in local solutions. This has been done by using chaotic number generators each time a random number is needed by the classical ABC algorithm. Seven new chaotic ABC algorithms have been proposed and different chaotic maps have been analyzed on benchmark functions. It has been found that coupling emergent results from different areas, like those of ABC and complex dynamics, can improve the quality of results in some optimization problems. It has also been shown that the proposed methods somewhat increase the solution quality; that is, in some cases they improve the global searching capability by escaping local solutions. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840175_1",
"text": "A huge number of academic papers are coming out from a lot of conferences and journals these days. In these circumstances, most researchers rely on keyword-based search or browsing through the proceedings of top conferences and journals to find their related work. To ease this difficulty, we propose a Personalized Academic Research Paper Recommendation System, which recommends related articles, for each researcher, that may be interesting to her/him. In this paper, we first introduce our web crawler to retrieve research papers from the web. Then, we define the similarity between two research papers based on the text similarity between them. Finally, we propose our recommender system developed using collaborative filtering methods. Our evaluation results demonstrate that our system recommends good quality research papers.",
"title": ""
},
{
"docid": "neg:1840175_2",
"text": "The smart grid is an electronically controlled electrical grid that connects power generation, transmission, distribution, and consumers using information communication technologies. One of the key characteristics of the smart grid is its support for bi-directional information flow between the consumer of electricity and the utility provider. This two-way interaction allows electricity to be generated in real-time based on consumers’ demands and power requests. As a result, consumer privacy becomes an important concern when collecting energy usage data with the deployment and adoption of smart grid technologies. To protect such sensitive information it is imperative that privacy protection mechanisms be used to protect the privacy of smart grid users. We present an analysis of recently proposed smart grid privacy solutions and identify their strengths and weaknesses in terms of their implementation complexity, efficiency, robustness, and simplicity.",
"title": ""
},
{
"docid": "neg:1840175_3",
"text": "Data provenance is essential for debugging query results, auditing data in cloud environments, and explaining outputs of Big Data analytics. A well-established technique is to represent provenance as annotations on data and to instrument queries to propagate these annotations to produce results annotated with provenance. However, even sophisticated optimizers are often incapable of producing efficient execution plans for instrumented queries, because of their inherent complexity and unusual structure. Thus, while instrumentation enables provenance support for databases without requiring any modification to the DBMS, the performance of this approach is far from optimal. In this work, we develop provenance-specific optimizations to address this problem. Specifically, we introduce algebraic equivalences targeted at instrumented queries and discuss alternative, equivalent ways of instrumenting a query for provenance capture. Furthermore, we present an extensible heuristic and cost-based optimization (CBO) framework that governs the application of these optimizations and implement this framework in our GProM provenance system. Our CBO is agnostic to the plan space shape, uses a DBMS for cost estimation, and enables retrofitting of optimization choices into existing code by adding a few LOC. Our experiments confirm that these optimizations are highly effective, often improving performance by several orders of magnitude for diverse provenance tasks.",
"title": ""
},
{
"docid": "neg:1840175_4",
"text": "Any bacterial population harbors a small number of phenotypic variants that survive exposure to high concentrations of antibiotic. Importantly, these so-called 'persister cells' compromise successful antibiotic therapy of bacterial infections and are thought to contribute to the development of antibiotic resistance. Intriguingly, drug-tolerant persisters have also been identified as a factor underlying failure of chemotherapy in tumor cell populations. Recent studies have begun to unravel the complex molecular mechanisms underlying persister formation and revolve around stress responses and toxin-antitoxin modules. Additionally, in vitro evolution experiments are revealing insights into the evolutionary and adaptive aspects of this phenotype. Furthermore, ever-improving experimental techniques are stimulating efforts to investigate persisters in their natural, infection-associated, in vivo environment. This review summarizes recent insights into the molecular mechanisms of persister formation, explains how persisters complicate antibiotic treatment of infections, and outlines emerging strategies to combat these tolerant cells.",
"title": ""
},
{
"docid": "neg:1840175_5",
"text": "With the advent of brain computer interfaces based on real-time fMRI (rtfMRI-BCI), the possibility of performing neurofeedback based on brain hemodynamics has become a reality. In the early stage of the development of this field, studies have focused on the volitional control of activity in circumscribed brain regions. However, based on the understanding that the brain functions by coordinated activity of spatially distributed regions, there have recently been further developments to incorporate real-time feedback of functional connectivity and spatio-temporal patterns of brain activity. The present article reviews the principles of rtfMRI neurofeedback, its applications, benefits and limitations. A special emphasis is given to the discussion of novel developments that have enabled the use of this methodology to achieve self-regulation of the functional connectivity between different brain areas and of distributed brain networks, anticipating new and exciting applications for cognitive neuroscience and for the potential alleviation of neuropsychiatric disorders.",
"title": ""
},
{
"docid": "neg:1840175_6",
"text": "Physical unclonable functions (PUFs) are security features that are based on process variations that occur during silicon chip fabrication. As PUFs are dependent on process variations, they need to be robust against reversible and irreversible temporal variabilities. In this paper, we present experimental results showing temporal variability in 4, 5, and 7-stage ring oscillator PUFs (ROPUFs). The reversible temporal variabilities are studied based on voltage and temperature variations, and the irreversible temporal variabilities are studied based on accelerated aging. Our results show that ROPUFs are sensitive to temperature and voltage variations regardless of the number of RO stages used. It is also observed that the aging, temperature, and voltage variation effects are observed to be uniformly distributed throughout the chip. This is evidenced by noting uniform changes in the RO frequency. Our results also show that most of the bit flips occur when the frequency difference in the RO pairs is low. This leads us to the conclusion that RO comparison pairs that pass high frequency threshold should be filtered to reduce temporal variabilities effect on the ROPUF. The experimental results also show that the 3-stage ROPUF has the lowest percentage of bit flip occurrences and the highest number of RO comparison pairs that pass high frequency threshold.",
"title": ""
},
{
"docid": "neg:1840175_7",
"text": "Capacitive rotary encoders are widely used in motor velocity and angular position control, where high-speed and high-precision angle calculation is required. This paper illustrates the implementation of the arctangent operation, based on the CORDIC (an acronym for COordinate Rotation DIgital Computer) algorithm, in capacitive rotary encoder signal demodulation in an FPGA to obtain the motor velocity and position. By skipping some unnecessary rotations in the CORDIC algorithm, we improve the algorithm's computing accuracy. Experiments show that the residual angle error is reduced by almost half after the CORDIC algorithm is optimized, and the implementation completely meets the precision requirements of the system.",
"title": ""
},
{
"docid": "neg:1840175_8",
"text": "Routing in Vehicular Ad hoc Networks is a challenging task due to the unique characteristics of the network such as high mobility of nodes, dynamically changing topology and highly partitioned network. It is a challenge to ensure reliable, continuous and seamless communication in the presence of speeding vehicles. The performance of routing protocols depends on various internal factors such as mobility of nodes and external factors such as road topology and obstacles that block the signal. This demands a highly adaptive approach to deal with the dynamic scenarios by selecting the best routing and forwarding strategies and by using appropriate mobility and propagation models. In this paper we review the existing routing protocols for VANETs and categorise them into a taxonomy based on key attributes such as network architecture, applications supported, routing strategies, forwarding strategies, mobility models and quality of service metrics. Protocols belonging to unicast, multicast, geocast and broadcast categories are discussed. Strengths and weaknesses of various protocols using topology based, position based and cluster based approaches are analysed. Emphasis is given on the adaptive and context-aware routing protocols. Simulation of broadcast and unicast protocols is carried out and the results are presented.",
"title": ""
},
{
"docid": "neg:1840175_9",
"text": "Since the advent of carbon nanotubes (CNTs) in 1991, scientists have predicted that carbon's immediate neighbors on the periodic table, boron and nitrogen, may also form perfect nanotubes. First proposed and then synthesized by researchers at UC Berkeley in the mid 1990's, the boron nitride nanotube (BNNT) has proven very difficult to make until now. Herein we provide an update on a catalyst-free method for synthesizing highly crystalline, small-diameter BNNTs with a high aspect ratio using a high power laser under a high pressure and high temperature environment, first discovered jointly by NASA/NIA JSA. Progress in purification methods, dispersion studies, BNNT mat and composite formation, and modeling and diagnostics will also be presented. The white BNNTs offer extraordinary properties including neutron radiation shielding, piezoelectricity, thermal oxidative stability (> 800 °C in air), mechanical strength, and toughness. The characteristics of the novel BNNTs and BNNT polymer composites and their potential applications are discussed.",
"title": ""
},
{
"docid": "neg:1840175_10",
"text": "This study investigated the influence of the completeness of CRM relational information processes on customer-based relational performance and profit performance. In addition, interaction orientation and CRM readiness were adopted as moderators on the relationship between CRM relational information processes and customer-based performance. Both qualitative and quantitative approaches were applied in this study. The results revealed that the completeness of CRM relational information processes facilitates customer-based relational performance (i.e., customer satisfaction, and positive WOM), and in turn enhances profit performance (i.e., efficiency with regard to identifying, acquiring and retaining, and converting unprofitable customers to profitable ones). The alternative model demonstrated that both interaction orientation and CRM readiness play a mediating role in the relationship between information processes and relational performance. Managers should strengthen the completeness and smoothness of CRM information processes, should increase the level of interactional orientation with customers and should maintain firm CRM readiness to service their customers. The implications of this research and suggestions for managers were also discussed.",
"title": ""
},
{
"docid": "neg:1840175_11",
"text": "Plants make the world a greener and a better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures that the plants are watered at regular intervals, with the appropriate amount of water, whenever they are in need. This paper describes the object-oriented design of an IoT based Automated Plant Watering System.",
"title": ""
},
{
"docid": "neg:1840175_12",
"text": "Lesions of the orbital frontal lobe, particularly its medial sectors, are known to cause deficits in empathic ability, whereas the role of this region in theory of mind processing is the subject of some controversy. In a functional magnetic resonance imaging study with healthy participants, emotional perspective-taking was contrasted with cognitive perspective-taking in order to examine the role of the orbital frontal lobe in subcomponents of theory of mind processing. Subjects responded to a series of scenarios presented visually in three conditions: emotional perspective-taking, cognitive perspective-taking and a control condition that required inferential reasoning, but not perspective-taking. Group results demonstrated that the medial orbitofrontal lobe, defined as Brodmann's areas 11 and 25, was preferentially involved in emotional as compared to cognitive perspective-taking. This finding is both consistent with the lesion literature, and resolves the inconsistency of orbital frontal findings in the theory of mind literature.",
"title": ""
},
{
"docid": "neg:1840175_13",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "neg:1840175_14",
"text": "Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of food items in meal images using a deep convolutional neural network (CNN) followed by a voting scheme. Our approach exploits the outstanding descriptive ability of a CNN, while the patch-wise model allows the generation of sufficient training samples, provides additional spatial flexibility for the recognition and ignores background pixels. (Table 2 of the source reports results of the proposed method for different voting schemes and variants, compared to a method from the literature.)",
"title": ""
},
{
"docid": "neg:1840175_15",
"text": "In this study, we will present the novel application of Type-2 (T2) fuzzy control into the popular video game called flappy bird. To the best of our knowledge, our work is the first deployment of the T2 fuzzy control into the computer games research area. We will propose a novel T2 fuzzified flappy bird control system that transforms the obstacle avoidance problem of the game logic into the reference tracking control problem. The presented T2 fuzzy control structure is composed of two important blocks which are the reference generator and Single Input Interval T2 Fuzzy Logic Controller (SIT2-FLC). The reference generator is the mechanism which uses the bird's position and the pipes' positions to generate an appropriate reference signal to be tracked. Thus, a conventional fuzzy feedback control system can be defined. The generated reference signal is tracked via the presented SIT2-FLC that can be easily tuned while also provides a certain degree of robustness to system. We will investigate the performance of the proposed T2 fuzzified flappy bird control system by providing comparative simulation results and also experimental results performed in the game environment. It will be shown that the proposed T2 fuzzified flappy bird control system results with a satisfactory performance both in the framework of fuzzy control and computer games. We believe that this first attempt of the employment of T2-FLCs in games will be an important step for a wider deployment of T2-FLCs in the research area of computer games.",
"title": ""
},
{
"docid": "neg:1840175_16",
"text": "This paper describes a method for maximum power point tracking (MPPT) control while searching for optimal parameters corresponding to weather conditions at that time. The conventional method has problems in that it is impossible to quickly acquire the generation power at the maximum power (MP) point in low solar radiation (irradiation) regions. It is found theoretically and experimentally that the maximum output power and the optimal current, which give this maximum, have a linear relation at a constant temperature. Furthermore, it is also shown that linearity exists between the short-circuit current and the optimal current. MPPT control rules are created based on the findings from solar arrays that can respond at high speeds to variations in irradiation. The proposed MPPT control method sets the output current track on the line that gives the relation between the MP and the optimal current so as to acquire the MP that can be generated at that time by dividing the power and current characteristics into two fields. The method is based on the generated power being a binary function of the output current. Considering the experimental fact that linearity is maintained only at low irradiation below half the maximum irradiation, the proportionality coefficient (voltage coefficient) is compensated for only in regions with more than half the rated optimal current, which correspond to the maximum irradiation. At high irradiation, the voltage coefficient needed to perform the proposed MPPT control is acquired through the hill-climbing method. The effectiveness of the proposed method is verified through experiments under various weather conditions",
"title": ""
},
{
"docid": "neg:1840175_17",
"text": "Dixon's method for computing multivariate resultants by simultaneously eliminating many variables is reviewed. The method is found to be quite restrictive because often the Dixon matrix is singular, and the Dixon resultant vanishes identically yielding no information about solutions for many algebraic and geometry problems. We extend Dixon's method for the case when the Dixon matrix is singular, but satisfies a condition. An efficient algorithm is developed based on the proposed extension for extracting conditions for the existence of affine solutions of a finite set of polynomials. Using this algorithm, numerous geometric and algebraic identities are derived for examples which appear intractable with other techniques of triangulation such as the successive resultant method, the Gröbner basis method, Macaulay resultants and Characteristic set method. Experimental results suggest that the resultant of a set of polynomials which are symmetric in the variables is relatively easier to compute using the extended Dixon's method.",
"title": ""
},
{
"docid": "neg:1840175_18",
"text": "BACKGROUND\nLarge comparative studies that have evaluated long-term functional outcome of operatively treated ankle fractures are lacking. This study was performed to analyse the influence of several combinations of malleolar fractures on long-term functional outcome and development of osteoarthritis.\n\n\nMETHODS\nRetrospective cohort-study on operated (1995-2007) malleolar fractures. Results were assessed with use of the AAOS- and AOFAS-questionnaires, VAS-pain score, dorsiflexion restriction (range of motion) and osteoarthritis. Categorisation was determined using the number of malleoli involved.\n\n\nRESULTS\n243 participants with a mean follow-up of 9.6 years were included. Significant differences for all outcomes were found between unimalleolar (isolated fibular) and bimalleolar (a combination of fibular and medial) fractures (AOFAS 97 vs 91, p = 0.035; AAOS 97 vs 90, p = 0.026; dorsiflexion restriction 2.8° vs 6.7°, p = 0.003). Outcomes after fibular fractures with an additional posterior fragment were similar to isolated fibular fractures. However, significant differences were found between unimalleolar and trimalleolar (a combination of lateral, medial and posterior) fractures (AOFAS 97 vs 88, p < 0.001; AAOS 97 vs 90, p = 0.003; VAS-pain 1.1 vs 2.3 p < 0.001; dorsiflexion restriction 2.9° vs 6.9°, p < 0.001). There was no significant difference in isolated fibular fractures with or without additional deltoid ligament injury. In addition, no functional differences were found between bimalleolar and trimalleolar fractures. Surprisingly, poor outcomes were found for isolated medial malleolar fractures. Development of osteoarthritis occurred mainly in trimalleolar fractures with a posterior fragment larger than 5 %.\n\n\nCONCLUSIONS\nThe results of our study show that long-term functional outcome is strongly associated to medial malleolar fractures, isolated or as part of bi- or trimalleolar fractures. More cases of osteoarthritis are found in trimalleolar fractures.",
"title": ""
},
{
"docid": "neg:1840175_19",
"text": "A novel model for asymmetric multiagent reinforcement learning is introduced in this paper. The model addresses the problem where the information states of the agents involved in the learning task are not equal; some agents (leaders) have information how their opponents (followers) will select their actions and based on this information leaders encourage followers to select actions that lead to improved payoffs for the leaders. This kind of configuration arises e.g. in semi-centralized multiagent systems with an external global utility associated to the system. We present a brief literature survey of multiagent reinforcement learning based on Markov games and then propose an asymmetric learning model that utilizes the theory of Markov games. Additionally, we construct a practical learning method based on the proposed learning model and study its convergence properties. Finally, we test our model with a simple example problem and a larger two-layer pricing application.",
"title": ""
}
] |
1840176 | Spectral and Energy-Efficient Wireless Powered IoT Networks: NOMA or TDMA? | [
{
"docid": "pos:1840176_0",
"text": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT.",
"title": ""
},
{
"docid": "pos:1840176_1",
"text": "In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.",
"title": ""
}
] | [
{
"docid": "neg:1840176_0",
"text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.",
"title": ""
},
{
"docid": "neg:1840176_1",
"text": "Developing segmentation techniques for overlapping cells has become a major hurdle for automated analysis of cervical cells. In this paper, an automated three-stage segmentation approach to segment the nucleus and cytoplasm of each overlapping cell is described. First, superpixel clustering is conducted to segment the image into small coherent clusters that are used to generate a refined superpixel map. The refined superpixel map is passed to an adaptive thresholding step to initially segment the image into cellular clumps and background. Second, a linear classifier with superpixel-based features is designed to finalize the separation between nuclei and cytoplasm. Finally, edge and region based cell segmentation are performed based on edge enhancement process, gradient thresholding, morphological operations, and region properties evaluation on all detected nuclei and cytoplasm pairs. The proposed framework has been evaluated using the ISBI 2014 challenge dataset. The dataset consists of 45 synthetic cell images, yielding 270 cells in total. Compared with the state-of-the-art approaches, our approach provides more accurate nuclei boundaries, as well as successfully segments most of overlapping cells.",
"title": ""
},
{
"docid": "neg:1840176_2",
"text": "Disaster management is a crucial and urgent research issue. Emergency communication networks (ECNs) provide fundamental functions for disaster management, because communication service is generally unavailable due to large-scale damage and restrictions in communication services. Considering the features of a disaster (e.g., limited resources and dynamic changing of environment), it is always a key problem to use limited resources effectively to provide the best communication services. Big data analytics in the disaster area provides possible solutions to understand the situations happening in disaster areas, so that limited resources can be optimally deployed based on the analysis results. In this paper, we survey existing ECNs and big data analytics from both the content and the spatial points of view. From the content point of view, we survey existing data mining and analysis techniques, and further survey and analyze applications and the possibilities to enhance ECNs. From the spatial point of view, we survey and discuss the most popular methods and further discuss the possibility to enhance ECNs. Finally, we highlight the remaining challenging problems after a systematic survey and studies of the possibilities.",
"title": ""
},
{
"docid": "neg:1840176_3",
"text": "Zero shot learning in Image Classification refers to the setting where images from some novel classes are absent in the training data but other information such as natural language descriptions or attribute vectors of the classes are available. This setting is important in the real world since one may not be able to obtain images of all the possible classes at training. While previous approaches have tried to model the relationship between the class attribute space and the image space via some kind of a transfer function in order to model the image space correspondingly to an unseen class, we take a different approach and try to generate the samples from the given attributes, using a conditional variational autoencoder, and use the generated samples for classification of the unseen classes. By extensive testing on four benchmark datasets, we show that our model outperforms the state of the art, particularly in the more realistic generalized setting, where the training classes can also appear at the test time along with the novel classes.",
"title": ""
},
{
"docid": "neg:1840176_4",
"text": "In this paper we present a voice command and mouth gesture based robot command interface which is capable of controlling three degrees of freedom. The gesture set was designed in order to avoid head rotation and translation, and thus relying solely in mouth movements. Mouth segmentation is performed by using the normalized a* component, as in [1]. The gesture detection process is carried out by a Gaussian Mixture Model (GMM) based classifier. After that, a state machine stabilizes the system response by restricting the number of possible movements depending on the initial state. Voice commands are modeled using a Hidden Markov Model (HMM) isolated word recognition scheme. The interface was designed taking into account the specific pose restrictions found in the DaVinci Assisted Surgery command console.",
"title": ""
},
{
"docid": "neg:1840176_5",
"text": "Large-scale classification is an increasingly critical Big Data problem. So far, however, very little has been published on how this is done in practice. In this paper we describe Chimera, our solution to classify tens of millions of products into 5000+ product types at WalmartLabs. We show that at this scale, many conventional assumptions regarding learning and crowdsourcing break down, and that existing solutions cease to work. We describe how Chimera employs a combination of learning, rules (created by in-house analysts), and crowdsourcing to achieve accurate, continuously improving, and cost-effective classification. We discuss a set of lessons learned for other similar Big Data systems. In particular, we argue that at large scales crowdsourcing is critical, but must be used in combination with learning, rules, and in-house analysts. We also argue that using rules (in conjunction with learning) is a must, and that more research attention should be paid to helping analysts create and manage (tens of thousands of) rules more effectively.",
"title": ""
},
{
"docid": "neg:1840176_6",
"text": "Herein we present a novel big-data framework for healthcare applications. Healthcare data is well suited for bigdata processing and analytics because of the variety, veracity and volume of these types of data. In recent times, many areas within healthcare have been identified that can directly benefit from such treatment. However, setting up these types of architecture is not trivial. We present a novel approach of building a big-data framework that can be adapted to various healthcare applications with relative use, making this a one-stop “Big-Data-Healthcare-in-a-Box”.",
"title": ""
},
{
"docid": "neg:1840176_7",
"text": "A variety of real-life mobile sensing applications are becoming available, especially in the life-logging, fitness tracking and health monitoring domains. These applications use mobile sensors embedded in smart phones to recognize human activities in order to get a better understanding of human behavior. While progress has been made, human activity recognition remains a challenging task. This is partly due to the broad range of human activities as well as the rich variation in how a given activity can be performed. Using features that clearly separate between activities is crucial. In this paper, we propose an approach to automatically extract discriminative features for activity recognition. Specifically, we develop a method based on Convolutional Neural Networks (CNN), which can capture local dependency and scale invariance of a signal as it has been shown in speech recognition and image recognition domains. In addition, a modified weight sharing technique, called partial weight sharing, is proposed and applied to accelerometer signals to get further improvements. The experimental results on three public datasets, Skoda (assembly line activities), Opportunity (activities in kitchen), Actitracker (jogging, walking, etc.), indicate that our novel CNN-based approach is practical and achieves higher accuracy than existing state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840176_8",
"text": "While data volumes continue to rise, the capacity of human attention remains limited. As a result, users need analytics engines that can assist in prioritizing attention in this fast data that is too large for manual inspection. We present a set of design principles for the design of fast data analytics engines that leverage the relative scarcity of human attention and overabundance of data: return fewer results, prioritize iterative analysis, and filter fast to compute less. We report on our early experiences employing these principles in the design and deployment of MacroBase, an open source analysis engine for prioritizing attention in fast data. By combining streaming operators for feature transformation, classification, and data summarization, MacroBase provides users with interpretable explanations of key behaviors, acting as a search engine for fast data.",
"title": ""
},
{
"docid": "neg:1840176_9",
"text": "We present a general approach to automating ethical decisions, drawing on machine learning and computational social choice. In a nutshell, we propose to learn a model of societal preferences, and, when faced with a specific ethical dilemma at runtime, efficiently aggregate those preferences to identify a desirable choice. We provide a concrete algorithm that instantiates our approach; some of its crucial steps are informed by a new theory of swap-dominance efficient voting rules. Finally, we implement and evaluate a system for ethical decision making in the autonomous vehicle domain, using preference data collected from 1.3 million people through the Moral Machine website.",
"title": ""
},
{
"docid": "neg:1840176_10",
"text": "As the primary barrier between an organism and its environment, epithelial cells are well-positioned to regulate tolerance while preserving immunity against pathogens. Class II major histocompatibility complex molecules (MHC class II) are highly expressed on the surface of epithelial cells (ECs) in both the lung and intestine, although the functional consequences of this expression are not fully understood. Here, we summarize current information regarding the interactions that regulate the expression of EC MHC class II in health and disease. We then evaluate the potential role of EC as non-professional antigen presenting cells. Finally, we explore future areas of study and the potential contribution of epithelial surfaces to gut-lung crosstalk.",
"title": ""
},
{
"docid": "neg:1840176_11",
"text": "This paper presents a tool developed for the purpose of assessing teaching presence in online courses that make use of computer conferencing, and preliminary results from the use of this tool. The method of analysis is based on Garrison, Anderson, and Archer’s [1] model of critical thinking and practical inquiry in a computer conferencing context. The concept of teaching presence is constitutively defined as having three categories – design and organization, facilitating discourse, and direct instruction. Indicators that we search for in the computer conference transcripts identify each category. Pilot testing of the instrument reveals interesting differences in the extent and type of teaching presence found in different graduate level online courses.",
"title": ""
},
{
"docid": "neg:1840176_12",
"text": "The classification problem of assigning several observations into different disjoint groups plays an important role in business decision making and many other areas. Developing more accurate and widely applicable classification models has significant implications in these areas. It is the reason that despite of the numerous classification models available, the research for improving the effectiveness of these models has never stopped. Combining several models or using hybrid models has become a common practice in order to overcome the deficiencies of single models and can be an effective way of improving upon their predictive performance, especially when the models in combination are quite different. In this paper, a novel hybridization of artificial neural networks (ANNs) is proposed using multiple linear regression models in order to yield more general and more accurate model than traditional artificial neural networks for solving classification problems. Empirical results indicate that the proposed hybrid model exhibits effectively improved classification accuracy in comparison with traditional artificial neural networks and also some other classification models such as linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), K-nearest neighbor (KNN), and support vector machines (SVMs) using benchmark and real-world application data sets. These data sets vary in the number of classes (two versus multiple) and the source of the data (synthetic versus real-world). Therefore, it can be applied as an appropriate alternate approach for solving classification problems, specifically when higher forecasting",
"title": ""
},
{
"docid": "neg:1840176_13",
"text": "Accurate face recognition is critical for many security applications. Current automatic face-recognition systems are defeated by natural changes in lighting and pose, which often affect face images more profoundly than changes in identity. The only system that can reliably cope with such variability is a human observer who is familiar with the faces concerned. We modeled human familiarity by using image averaging to derive stable face representations from naturally varying photographs. This simple procedure increased the accuracy of an industry standard face-recognition algorithm from 54% to 100%, bringing the robust performance of a familiar human to an automated system.",
"title": ""
},
{
"docid": "neg:1840176_14",
"text": "Objectives To evaluate the ability of a short-form FCE to predict future timely and sustained return-to-work. Methods A prospective cohort study was conducted using data collected during a cluster RCT. Subject performance on the items in the short-form FCE was compared to administrative recovery outcomes from a workers’ compensation database. Outcomes included days to claim closure, days to time loss benefit suspension and future recurrence (defined as re-opening a closed claim, restarting benefits, or filing a new claim for injury to the same body region). Analysis included multivariable Cox and logistic regression using a risk factor modeling strategy. Potential confounders included age, sex, injury duration, and job attachment status, among others. Results The sample included 147 compensation claimants with a variety of musculoskeletal injuries. Subjects who demonstrated job demand levels on all FCE items were more likely to have their claims closed (adjusted Hazard Ratio 5.52 (95% Confidence Interval 3.42–8.89), and benefits suspended (adjusted Hazard Ratio 5.45 (95% Confidence Interval 2.73–10.85) over the follow-up year. The proportion of variance explained by the FCE ranged from 18 to 27%. FCE performance was not significantly associated with future recurrence. Conclusion A short-form FCE appears to provide useful information for predicting time to recovery as measured through administrative outcomes, but not injury recurrence. The short-form FCE may be an efficient option for clinicians using FCE in the management of injured workers.",
"title": ""
},
{
"docid": "neg:1840176_15",
"text": "Based on the CSMC 0.6um 40V BCD process and the bandgap principle, a reference circuit for use in high voltage chips is designed. The simulation results show a temperature coefficient of 26.5ppm/°C in the range of 3.5∼40V supply, and the output voltage is insensitive to the power supply: when the supply voltage ranges from 3.5∼40V, the output voltage varies only from 1.2558V to 1.2573V at room temperature. The circuit we designed has high precision and stability, thus it can be used as a stable reference voltage in power management ICs.",
"title": ""
},
{
"docid": "neg:1840176_16",
"text": "It is very difficult to over-emphasize the benefits of accurate data. Errors in data are generally the most expensive aspect of data entry, costing the users much more compared to the original data entry. Unfortunately, these costs are intangibles or difficult to measure. If errors are detected at an early stage then it requires little cost to remove them. Incorrect and misleading data lead to all sorts of unpleasant and unnecessary expenses. Unluckily, it would be very expensive to correct the errors after the data has been processed, particularly when the processed data has been converted into the knowledge for decision making. No doubt a stitch in time saves nine, i.e. a timely effort will prevent more work at a later stage. Moreover, time spent in processing errors can also have a significant cost. One of the major problems with automated data entry systems is errors. In this paper we discuss many well known techniques to minimize errors and different cleansing approaches, and suggest how we can improve the accuracy rate. Frameworks available for data cleansing offer fundamental services such as attribute selection, formation of tokens, selection of clustering algorithms, selection of eliminator functions etc.",
"title": ""
},
{
"docid": "neg:1840176_17",
"text": "The development of ivermectin as a complementary vector control tool will require good quality evidence. This paper reviews the different eco-epidemiological contexts in which mass drug administration with ivermectin could be useful. Potential scenarios and pharmacological strategies are compared in order to help guide trial design. The rationale for a particular timing of an ivermectin-based tool and some potentially useful outcome measures are suggested.",
"title": ""
},
{
"docid": "neg:1840176_18",
"text": "Cadmium (Cd) is a toxic, nonessential transition metal and contributes a health risk to humans, including various cancers and cardiovascular diseases; however, underlying molecular mechanisms remain largely unknown. Cells transmit information to the next generation via two distinct ways: genetic and epigenetic. Chemical modifications to DNA or histone that alters the structure of chromatin without change of DNA nucleotide sequence are known as epigenetics. These heritable epigenetic changes include DNA methylation, post-translational modifications of histone tails (acetylation, methylation, phosphorylation, etc), and higher order packaging of DNA around nucleosomes. Apart from DNA methyltransferases, histone modification enzymes such as histone acetyltransferase, histone deacetylase, and methyltransferase, and microRNAs (miRNAs) all involve in these epigenetic changes. Recent studies indicate that Cd is able to induce various epigenetic changes in plant and mammalian cells in vitro and in vivo. Since aberrant epigenetics plays a critical role in the development of various cancers and chronic diseases, Cd may cause the above-mentioned pathogenic risks via epigenetic mechanisms. Here we review the in vitro and in vivo evidence of epigenetic effects of Cd. The available findings indicate that epigenetics occurred in association with Cd induction of malignant transformation of cells and pathological proliferation of tissues, suggesting that epigenetic effects may play a role in Cd toxic, particularly carcinogenic effects. The future of environmental epigenomic research on Cd should include the role of epigenetics in determining long-term and late-onset health effects following Cd exposure.",
"title": ""
}
] |
1840177 | A hybrid approach to fake news detection on social media | [
{
"docid": "pos:1840177_0",
"text": "Satire is an attractive subject in deception detection research: it is a type of deception that intentionally incorporates cues revealing its own deceptiveness. Whereas other types of fabrications aim to instill a false sense of truth in the reader, a successful satirical hoax must eventually be exposed as a jest. This paper provides a conceptual overview of satire and humor, elaborating and illustrating the unique features of satirical news, which mimics the format and style of journalistic reporting. Satirical news stories were carefully matched and examined in contrast with their legitimate news counterparts in 12 contemporary news topics in 4 domains (civics, science, business, and “soft” news). Building on previous work in satire detection, we proposed an SVMbased algorithm, enriched with 5 predictive features (Absurdity, Humor, Grammar, Negative Affect, and Punctuation) and tested their combinations on 360 news articles. Our best predicting feature combination (Absurdity, Grammar and Punctuation) detects satirical news with a 90% precision and 84% recall (F-score=87%). Our work in algorithmically identifying satirical news pieces can aid in minimizing the potential deceptive impact of satire.",
"title": ""
}
] | [
{
"docid": "neg:1840177_0",
"text": "A soft-start circuit with soft-recovery function for DC-DC converters is presented in this paper. The soft-start strategy is based on a linearly ramped-up reference and an error amplifier with minimum selector implemented with a three-limb differential pair skillfully. The soft-recovery strategy is based on a compact clamp circuit. The ramp voltage would be clamped once the feedback voltage is detected lower than a threshold, which could control the output to be recovered slowly and linearly. A monolithic DC-DC buck converter with proposed circuit has been fabricated with a 0.5μm CMOS process for validation. The measurement result shows that the ramp-based soft-start and soft-recovery circuit have good performance and agree well with the theoretical analysis.",
"title": ""
},
{
"docid": "neg:1840177_1",
"text": "We developed and evaluated a multimodal affect detector that combines conversational cues, gross body language, and facial features. The multimodal affect detector uses feature-level fusion to combine the sensory channels and linear discriminant analyses to discriminate between naturally occurring experiences of boredom, engagement/flow, confusion, frustration, delight, and neutral. Training and validation data for the affect detector were collected in a study where 28 learners completed a 32- min. tutorial session with AutoTutor, an intelligent tutoring system with conversational dialogue. Classification results supported a channel × judgment type interaction, where the face was the most diagnostic channel for spontaneous affect judgments (i.e., at any time in the tutorial session), while conversational cues were superior for fixed judgments (i.e., every 20 s in the session). The analyses also indicated that the accuracy of the multichannel model (face, dialogue, and posture) was statistically higher than the best single-channel model for the fixed but not spontaneous affect expressions. However, multichannel models reduced the discrepancy (i.e., variance in the precision of the different emotions) of the discriminant models for both judgment types. The results also indicated that the combination of channels yielded superadditive effects for some affective states, but additive, redundant, and inhibitory effects for others. We explore the structure of the multimodal linear discriminant models and discuss the implications of some of our major findings.",
"title": ""
},
{
"docid": "neg:1840177_2",
"text": "A huge repository of terabytes of data is generated each day from modern information systems and digital technologies such as Internet of Things and cloud computing. Analysis of these massive data requires a lot of efforts at multiple levels to extract knowledge for decision making. Therefore, big data analysis is a current area of research and development. The basic objective of this paper is to explore the potential impact of big data challenges, open research issues, and various tools associated with it. As a result, this article provides a platform to explore big data at numerous stages. Additionally, it opens a new horizon for researchers to develop the solution, based on the challenges and open research issues. Keywords—Big data analytics; Hadoop; Massive data; Structured data; Unstructured Data",
"title": ""
},
{
"docid": "neg:1840177_3",
"text": "Yann Le Cun AT&T Bell Labs Holmdel NJ 07733 We introduce a new approach for on-line recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low resolution \"annotated images\" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores. The entire system is globally trained to minimize word-level errors.",
"title": ""
},
{
"docid": "neg:1840177_4",
"text": "Financial time-series forecasting has long been a challenging problem because of the inherently noisy and stochastic nature of the market. In the high-frequency trading, forecasting for trading purposes is even a more challenging task, since an automated inference system is required to be both accurate and fast. In this paper, we propose a neural network layer architecture that incorporates the idea of bilinear projection as well as an attention mechanism that enables the layer to detect and focus on crucial temporal information. The resulting network is highly interpretable, given its ability to highlight the importance and contribution of each temporal instance, thus allowing further analysis on the time instances of interest. Our experiments in a large-scale limit order book data set show that a two-hidden-layer network utilizing our proposed layer outperforms by a large margin all existing state-of-the-art results coming from much deeper architectures while requiring far fewer computations.",
"title": ""
},
{
"docid": "neg:1840177_5",
"text": "OBJECTIVE\nTo evaluate the curing profile of bulk-fill resin-based composites (RBC) using micro-Raman spectroscopy (μRaman).\n\n\nMETHODS\nFour bulk-fill RBCs were compared to a conventional RBC. RBC blocks were light-cured using a polywave LED light-curing unit. The 24-h degree of conversion (DC) was mapped along a longitudinal cross-section using μRaman. Curing profiles were constructed and 'effective' (>90% of maximum DC) curing parameters were calculated. A statistical linear mixed effects model was constructed to analyze the relative effect of the different curing parameters.\n\n\nRESULTS\nCuring efficiency differed widely with the flowable bulk-fill RBCs presenting a significantly larger 'effective' curing area than the fibre-reinforced RBC, which on its turn revealed a significantly larger 'effective' curing area than the full-depth bulk-fill and conventional (control) RBC. A decrease in 'effective' curing depth within the light beam was found in the same order. Only the flowable bulk-fill RBCs were able to cure 'effectively' at a 4-mm depth for the whole specimen width (up to 4mm outside the light beam). All curing parameters were found to statistically influence the statistical model and thus the curing profile, except for the beam inhomogeneity (regarding the position of the 410-nm versus that of 470-nm LEDs) that did not significantly affect the model for all RBCs tested.\n\n\nCONCLUSIONS\nMost of the bulk-fill RBCs could be cured up to at least a 4-mm depth, thereby validating the respective manufacturer's recommendations.\n\n\nCLINICAL SIGNIFICANCE\nAccording to the curing profiles, the orientation and position of the light guide is less critical for the bulk-fill RBCs than for the conventional RBC.",
"title": ""
},
{
"docid": "neg:1840177_6",
"text": "It has been pointed out that about 30% of the traffic congestion is caused by vehicles cruising around their destination and looking for a place to park. Therefore, addressing the problems associated with parking in crowded urban areas is of great significance. One effective solution is providing guidance for the vehicles to be parked according to the occupancy status of each parking lot. However, the existing parking guidance schemes mainly rely on deploying sensors or RSUs in the parking lot, which would incur substantial capital overhead. To reduce the aforementioned cost, we propose IPARK, which taps into the unused resources (e.g., wireless device, rechargeable battery, and storage capability) offered by parked vehicles to perform parking guidance. In IPARK, the cluster formed by parked vehicles generates the parking lot map automatically, monitors the occupancy status of each parking space in real time, and provides assistance for vehicles searching for parking spaces. We propose an efficient architecture for IPARK and investigate the challenging issues in realizing parking guidance over this architecture. Finally, we investigate IPARK through realistic experiments and simulation. The numerical results obtained verify that our scheme achieves effective parking guidance in VANETs.",
"title": ""
},
{
"docid": "neg:1840177_7",
"text": "BACKGROUND\nUse of the Internet for health information continues to grow rapidly, but its impact on health care is unclear. Concerns include whether patients' access to large volumes of information will improve their health; whether the variable quality of the information will have a deleterious effect; the effect on health disparities; and whether the physician-patient relationship will be improved as patients become more equal partners, or be damaged if physicians have difficulty adjusting to a new role.\n\n\nMETHODS\nTelephone survey of nationally representative sample of the American public, with oversample of people in poor health.\n\n\nRESULTS\nOf the 3209 respondents, 31% had looked for health information on the Internet in the past 12 months, 16% had found health information relevant to themselves and 8% had taken information from the Internet to their physician. Looking for information on the Internet showed a strong digital divide; however, once information had been looked for, socioeconomic factors did not predict other outcomes. Most (71%) people who took information to the physician wanted the physician's opinion, rather than a specific intervention. The effect of taking information to the physician on the physician-patient relationship was likely to be positive as long as the physician had adequate communication skills, and did not appear challenged by the patient bringing in information.\n\n\nCONCLUSIONS\nFor health information on the Internet to achieve its potential as a force for equity and patient well-being, actions are required to overcome the digital divide; assist the public in developing searching and appraisal skills; and ensure physicians have adequate communication skills.",
"title": ""
},
{
"docid": "neg:1840177_8",
"text": "The spatial responses of many of the cells recorded in all layers of rodent medial entorhinal cortex (mEC) show mutually aligned grid patterns. Recent experimental findings have shown that grids can often be better described as elliptical rather than purely circular and that, beyond the mutual alignment of their grid axes, ellipses tend to also orient their long axis along preferred directions. Are grid alignment and ellipse orientation aspects of the same phenomenon? Does the grid alignment result from single-unit mechanisms or does it require network interactions? We address these issues by refining a single-unit adaptation model of grid formation, to describe specifically the spontaneous emergence of conjunctive grid-by-head-direction cells in layers III, V, and VI of mEC. We find that tight alignment can be produced by recurrent collateral interactions, but this requires head-direction (HD) modulation. Through a competitive learning process driven by spatial inputs, grid fields then form already aligned, and with randomly distributed spatial phases. In addition, we find that the self-organization process is influenced by any anisotropy in the behavior of the simulated rat. The common grid alignment often orients along preferred running directions (RDs), as induced in a square environment. When speed anisotropy is present in exploration behavior, the shape of individual grids is distorted toward an ellipsoid arrangement. Speed anisotropy orients the long ellipse axis along the fast direction. Speed anisotropy on its own also tends to align grids, even without collaterals, but the alignment is seen to be loose. Finally, the alignment of spatial grid fields in multiple environments shows that the network expresses the same set of grid fields across environments, modulo a coherent rotation and translation. Thus, an efficient metric encoding of space may emerge through spontaneous pattern formation at the single-unit level, but it is coherent, hence context-invariant, if aided by collateral interactions.",
"title": ""
},
{
"docid": "neg:1840177_9",
"text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, by unawareness of its domain limitations or simply to test their capacity. These interactions are considered to be Out-Of-Domain and several strategies can be found in the literature to deal with some specific situations. Since if a certain input appears once, it has a non-zero probability of being entered later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges, thus, naturally. In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.",
"title": ""
},
{
"docid": "neg:1840177_10",
"text": "Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations. We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring sourcecode. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown results in low performance overhead and a security analysis discusses any remaining gadgets.",
"title": ""
},
{
"docid": "neg:1840177_11",
"text": "The extraordinary electronic properties of graphene provided the main thrusts for the rapid advance of graphene electronics. In photonics, the gate-controllable electronic properties of graphene provide a route to efficiently manipulate the interaction of photons with graphene, which has recently sparked keen interest in graphene plasmonics. However, the electro-optic tuning capability of unpatterned graphene alone is still not strong enough for practical optoelectronic applications owing to its non-resonant Drude-like behaviour. Here, we demonstrate that substantial gate-induced persistent switching and linear modulation of terahertz waves can be achieved in a two-dimensional metamaterial, into which an atomically thin, gated two-dimensional graphene layer is integrated. The gate-controllable light-matter interaction in the graphene layer can be greatly enhanced by the strong resonances of the metamaterial. Although the thickness of the embedded single-layer graphene is more than six orders of magnitude smaller than the wavelength (<λ/1,000,000), the one-atom-thick layer, in conjunction with the metamaterial, can modulate both the amplitude of the transmitted wave by up to 47% and its phase by 32.2° at room temperature. More interestingly, the gate-controlled active graphene metamaterials show hysteretic behaviour in the transmission of terahertz waves, which is indicative of persistent photonic memory effects.",
"title": ""
},
{
"docid": "neg:1840177_12",
"text": "Deep neural networks have been proven powerful at processing perceptual data, such as images and audio. However for tabular data, tree-based models are more popular. A nice property of tree-based models is their natural interpretability. In this work, we present Deep Neural Decision Trees (DNDT) – tree models realised by neural networks. A DNDT is intrinsically interpretable, as it is a tree. Yet as it is also a neural network (NN), it can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting. We evaluate DNDT on several tabular datasets, verify its efficacy, and investigate similarities and differences between DNDT and vanilla decision trees. Interestingly, DNDT self-prunes at both split and feature-level.",
"title": ""
},
{
"docid": "neg:1840177_13",
"text": "Wearable devices are used in various applications to collect information including step information, sleeping cycles, workout statistics, and health-related information. Due to the nature and richness of the data collected by such devices, it is important to ensure the security of the collected data. This paper presents a new lightweight authentication scheme suitable for wearable device deployment. The scheme allows a user to mutually authenticate his/her wearable device(s) and the mobile terminal (e.g., Android and iOS device) and establish a session key among these devices (worn and carried by the same user) for secure communication between the wearable device and the mobile terminal. The security of the proposed scheme is then demonstrated through the broadly accepted real-or-random model, as well as using the popular formal security verification tool, known as the Automated validation of Internet security protocols and applications. Finally, we present a comparative summary of the proposed scheme in terms of the overheads such as computation and communication costs, security and functionality features of the proposed scheme and related schemes, and also the evaluation findings from the NS2 simulation.",
"title": ""
},
{
"docid": "neg:1840177_14",
"text": "In this paper, we propose a free viewpoint image rendering method combined with filter based alpha matting for improving the image quality of image boundaries. When we synthesize a free viewpoint image, blur around object boundaries in an input image spills foreground/background color in the synthesized image. To generate smooth boundaries, alpha matting is a solution. In our method based on filtering, we make a boundary map from input images and depth maps, and then feather the map by using guided filter. In addition, we extend view synthesis method to deal the alpha channel. Experiment results show that the proposed method synthesizes 0.4 dB higher quality images than the conventional method without the matting. Also the proposed method synthesizes 0.2 dB higher quality images than the conventional method of robust matting. In addition, the computational cost of the proposed method is 100x faster than the conventional matting.",
"title": ""
},
{
"docid": "neg:1840177_15",
"text": "I. Introduction UNTIL 35 yr ago, most scientists did not take research on the pineal gland seriously. The decade beginning in 1956, however, provided several discoveries that laid the foundation for what has become a very active area of investigation. These important early observations included the findings that, 1), the physiological activity of the pineal is influenced by the photoperiodic environment (1–5); 2), the gland contains a substance, N-acetyl-5-methoxytryptamine or melatonin, which has obvious endocrine capabilities (6, 7); 3), the function of the reproductive system in photoperiodically dependent rodents is inextricably linked to the physiology of the pineal gland (5, 8, 9); 4), the sympathetic innervation to the pineal is required for the gland to maintain its biosynthetic and endocrine activities (10, 11); and 5), the pineal gland can be rapidly removed from rodents with minimal damage to adjacent neural structures using a specially designed trephine (12).",
"title": ""
},
{
"docid": "neg:1840177_16",
"text": "PHP is one of the most popular languages for server-side application development. The language is highly dynamic, providing programmers with a large amount of flexibility. However, these dynamic features also have a cost, making it difficult to apply traditional static analysis techniques used in standard code analysis and transformation tools. As part of our work on creating analysis tools for PHP, we have conducted a study over a significant corpus of open-source PHP systems, looking at the sizes of actual PHP programs, which features of PHP are actually used, how often dynamic features appear, and how distributed these features are across the files that make up a PHP website. We have also looked at whether uses of these dynamic features are truly dynamic or are, in some cases, statically understandable, allowing us to identify specific patterns of use which can then be taken into account to build more precise tools. We believe this work will be of interest to creators of analysis tools for PHP, and that the methodology we present can be leveraged for other dynamic languages with similar features.",
"title": ""
},
{
"docid": "neg:1840177_17",
"text": "In this paper, we have conducted a literature review on the recent developments and publications involving the vehicle routing problem and its variants, namely vehicle routing problem with time windows (VRPTW) and the capacitated vehicle routing problem (CVRP) and also their variants. The VRP is classified as an NP-hard problem. Hence, the use of exact optimization methods may be difficult to solve these problems in acceptable CPU times, when the problem involves real-world data sets that are very large. The vehicle routing problem comes under combinatorial problem. Hence, to get solutions in determining routes which are realistic and very close to the optimal solution, we use heuristics and meta-heuristics. In this paper we discuss the various exact methods and the heuristics and meta-heuristics used to solve the VRP and its variants.",
"title": ""
},
{
"docid": "neg:1840177_18",
"text": "In patients with spinal cord injury, the primary or mechanical trauma seldom causes total transection, even though the functional loss may be complete. In addition, biochemical and pathological changes in the cord may worsen after injury. To explain these phenomena, the concept of the secondary injury has evolved for which numerous pathophysiological mechanisms have been postulated. This paper reviews the concept of secondary injury with special emphasis on vascular mechanisms. Evidence is presented to support the theory of secondary injury and the hypothesis that a key mechanism is posttraumatic ischemia with resultant infarction of the spinal cord. Evidence for the role of vascular mechanisms has been obtained from a variety of models of acute spinal cord injury in several species. Many different angiographic methods have been used for assessing microcirculation of the cord and for measuring spinal cord blood flow after trauma. With these techniques, the major systemic and local vascular effects of acute spinal cord injury have been identified and implicated in the etiology of secondary injury. The systemic effects of acute spinal cord injury include hypotension and reduced cardiac output. The local effects include loss of autoregulation in the injured segment of the spinal cord and a marked reduction of the microcirculation in both gray and white matter, especially in hemorrhagic regions and in adjacent zones. The microcirculatory loss extends for a considerable distance proximal and distal to the site of injury. Many studies have shown a dose-dependent reduction of spinal cord blood flow varying with the severity of injury, and a reduction of spinal cord blood flow which worsens with time after injury. The functional deficits due to acute spinal cord injury have been measured electrophysiologically with techniques such as motor and somatosensory evoked potentials and have been found proportional to the degree of posttraumatic ischemia. The histological effects include early hemorrhagic necrosis leading to major infarction at the injury site. These posttraumatic vascular effects can be treated. Systemic normotension can be restored with volume expansion or vasopressors, and spinal cord blood flow can be improved with dopamine, steroids, nimodipine, or volume expansion. The combination of nimodipine and volume expansion improves posttraumatic spinal cord blood flow and spinal cord function measured by evoked potentials. These results provide strong evidence that posttraumatic ischemia is an important secondary mechanism of injury, and that it can be counteracted.",
"title": ""
},
{
"docid": "neg:1840177_19",
"text": "868 NOTICES OF THE AMS VOLUME 47, NUMBER 8 In April 2000 the National Council of Teachers of Mathematics (NCTM) released Principles and Standards for School Mathematics—the culmination of a multifaceted, three-year effort to update NCTM’s earlier standards documents and to set forth goals and recommendations for mathematics education in the prekindergarten-through-grade-twelve years. As the chair of the Writing Group, I had the privilege to interact with all aspects of the development and review of this document and with the committed groups of people, including the members of the Writing Group, who contributed immeasurably to this process. This article provides some background about NCTM and the standards, the process of development, efforts to gather input and feedback, and ways in which feedback from the mathematics community influenced the document. The article concludes with a section that provides some suggestions for mathematicians who are interested in using Principles and Standards.",
"title": ""
}
] |
1840178 | A multi-source dataset of urban life in the city of Milan and the Province of Trentino | [
{
"docid": "pos:1840178_0",
"text": "In this paper we are concerned with the practical issues of working with data sets common to finance, statistics, and other related fields. pandas is a new library which aims to facilitate working with these data sets and to provide a set of fundamental building blocks for implementing statistical models. We will discuss specific design issues encountered in the course of developing pandas with relevant examples and some comparisons with the R language. We conclude by discussing possible future directions for statistical computing and data analysis using Python.",
"title": ""
},
{
"docid": "pos:1840178_1",
"text": "Human movements contribute to the transmission of malaria on spatial scales that exceed the limits of mosquito dispersal. Identifying the sources and sinks of imported infections due to human travel and locating high-risk sites of parasite importation could greatly improve malaria control programs. Here, we use spatially explicit mobile phone data and malaria prevalence information from Kenya to identify the dynamics of human carriers that drive parasite importation between regions. Our analysis identifies importation routes that contribute to malaria epidemiology on regional spatial scales.",
"title": ""
}
] | [
{
"docid": "neg:1840178_0",
"text": "The need for organizations to operate in changing environments is addressed by proposing an approach that integrates organizational development with information system (IS) development taking into account changes in the application context of the solution. This is referred to as Capability Driven Development (CDD). A meta-model representing business and IS designs consisting of goals, key performance indicators, capabilities, context and capability delivery patterns, is being proposed. The use of the meta-model is validated in three industrial case studies as part of an ongoing collaboration project, whereas one case is presented in the paper. Issues related to the use of the CDD approach, namely, CDD methodology and tool support are also discussed.",
"title": ""
},
{
"docid": "neg:1840178_1",
"text": "Interactions around money and financial services are a critical part of our lives on and off-line. New technologies and new ways of interacting with these technologies are of huge interest; they enable new business models and ways of making sense of this most important aspect of our everyday lives. At the same time, money is an essential element in HCI research and design. This workshop is intended to bring together researchers and practitioners involved in the design and use of systems that combine digital and new media with monetary and financial interactions to build on an understanding of these technologies and their impacts on users' behaviors. The workshop will focus on social, technical, and economic aspects around everyday user interactions with money and emerging financial technologies and systems.",
"title": ""
},
{
"docid": "neg:1840178_2",
"text": "A health risk appraisal function has been developed for the prediction of stroke using the Framingham Study cohort. The stroke risk factors included in the profile are age, systolic blood pressure, the use of antihypertensive therapy, diabetes mellitus, cigarette smoking, prior cardiovascular disease (coronary heart disease, cardiac failure, or intermittent claudication), atrial fibrillation, and left ventricular hypertrophy by electrocardiogram. Based on 472 stroke events occurring during 10 years' follow-up from biennial examinations 9 and 14, stroke probabilities were computed using the Cox proportional hazards model for each sex based on a point system. On the basis of the risk factors in the profile, which can be readily determined on routine physical examination in a physician's office, stroke risk can be estimated. An individual's risk can be related to the average risk of stroke for persons of the same age and sex. The information that one's risk of stroke is several times higher than average may provide the impetus for risk factor modification. It may also help to identify persons at substantially increased stroke risk resulting from borderline levels of multiple risk factors such as those with mild or borderline hypertension and facilitate multifactorial risk factor modification.",
"title": ""
},
{
"docid": "neg:1840178_3",
"text": "The Khanya project has been equipping schools and educators with ICT skills and equipment to be used in the curriculum delivery in South Africa. However, research and anecdotal evidence show that there is low adoption rate of ICT among educators in Khanya schools. This interpretive study sets out to analyse the factors which are preventing the educators from using the technology in their work. The perspective of limited access and/or use of ICT as deprivation of capabilities provides a conceptual base for this paper. We employed Sen’s Capability Approach as a conceptual lens to examine the educators’ situation regarding ICT for teaching and learning. Data was collected through in-depth interviews with fourteen educators and two Khanya personnel. The results of the study show that there are a number of factors (personal, social and environmental) which are preventing the educators from realising their potential capabilities from the ICT.",
"title": ""
},
{
"docid": "neg:1840178_4",
"text": "Humans have an innate tendency to anthropomorphize surrounding entities and have always been fascinated by the creation of machines endowed with human-inspired capabilities and traits. In the last few decades, this has become a reality with enormous advances in hardware performance, computer graphics, robotics technology, and artificial intelligence. New interdisciplinary research fields have brought forth cognitive robotics aimed at building a new generation of control systems and providing robots with social, empathetic and affective capabilities. This paper presents the design, implementation, and test of a human-inspired cognitive architecture for social robots. State-of-the-art design approaches and methods are thoroughly analyzed and discussed, cases where the developed system has been successfully used are reported. The tests demonstrated the system’s ability to endow a social humanoid robot with human social behaviors and with in-silico robotic emotions.",
"title": ""
},
{
"docid": "neg:1840178_5",
"text": "Planning radar sites is very important for several civilian and military applications. Depending on the security or defence issue different requirements exist regarding the radar coverage and the radar sites. QSiteAnalysis offers several functions to automate, improve and speed up this highly complex task. Wave propagation effects such as diffraction, refraction, multipath and atmospheric attenuation are considered for the radar coverage calculation. Furthermore, an automatic optimisation of the overall coverage is implemented by optimising the radar sites. To display the calculation result, the calculated coverage is visualised in 2D and 3D. Therefore, QSiteAnalysis offers several functions to improve and automate radar site studies.",
"title": ""
},
{
"docid": "neg:1840178_6",
"text": "Semantic Web Mining is the outcome of two new and fast developing domains: Semantic Web and Data Mining. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Data Mining is the nontrivial process of identifying valid, previously unknown, potentially useful patterns in data. Semantic Web Mining refers to the application of data mining techniques to extract knowledge from World Wide Web or the area of data mining that refers to the use of algorithms for extracting patterns from resources distributed over in the web. The aim of Semantic Web Mining is to discover and retrieve useful and interesting patterns from a huge set of web data. This web data consists of different kind of information, including web structure data, web log data and user profiles data. Semantic Web Mining is a relatively new area, broadly interdisciplinary, attracting researchers from: computer science, information retrieval specialists and experts from business studies fields. Web data mining includes web content mining, web structure mining and web usage mining. All of these approaches attempt to extract knowledge from the web, produce some useful results from the knowledge extracted and apply these results to the real world problems. To improve the internet service quality and increase the user click rate on a specific website, it is necessary for a web developer to know what the user really want to do, predict which pages the user is potentially interested in. In this paper, various techniques for Semantic Web mining like web content mining, web usage mining and web structure mining are discussed. Our main focus is on web usage mining and its application in web personalization. Study shows that the accuracy of recommendation system has improved significantly with the use of semantic web mining in web personalization.",
"title": ""
},
{
"docid": "neg:1840178_7",
"text": "Salsify is a new architecture for real-time Internet video that tightly integrates a video codec and a network transport protocol, allowing it to respond quickly to changing network conditions and avoid provoking packet drops and queueing delays. To do this, Salsify optimizes the compressed length and transmission time of each frame, based on a current estimate of the network’s capacity; in contrast, existing systems generally control longer-term metrics like frame rate or bit rate. Salsify’s per-frame optimization strategy relies on a purely functional video codec, which Salsify uses to explore alternative encodings of each frame at different quality levels. We developed a testbed for evaluating real-time video systems end-to-end with reproducible video content and network conditions. Salsify achieves lower video delay and, over variable network paths, higher visual quality than five existing systems: FaceTime, Hangouts, Skype, and WebRTC’s reference implementation with and without scalable video coding.",
"title": ""
},
{
"docid": "neg:1840178_8",
"text": "As a result of disparities in the educational system, numerous scholars and educators across disciplines currently support the STEAM (Science, Technology, Engineering, Art, and Mathematics) movement for arts integration. An educational approach to learning focusing on guiding student inquiry, dialogue, and critical thinking through interdisciplinary instruction, STEAM values proficiency, knowledge, and understanding. Despite extant literature urging for this integration, the trend has yet to significantly influence federal or state standards for K-12 education in the United States. This paper provides a brief and focused review of key theories and research from the fields of cognitive psychology and neuroscience outlining the benefits of arts integrative curricula in the classroom. Cognitive psychologists have found that the arts improve participant retention and recall through semantic elaboration, generation of information, enactment, oral production, effort after meaning, emotional arousal, and pictorial representation. Additionally, creativity is considered a higher-order cognitive skill and EEG results show novel brain patterns associated with creative thinking. Furthermore, cognitive neuroscientists have found that long-term artistic training can augment these patterns as well as lead to greater plasticity and neurogenesis in associated brain regions. Research suggests that artistic training increases retention and recall, generates new patterns of thinking, induces plasticity, and results in strengthened higher-order cognitive functions related to creativity. These benefits of arts integration, particularly as approached in the STEAM movement, are what develops students into adaptive experts that have the skills to then contribute to innovation in a variety of disciplines.",
"title": ""
},
{
"docid": "neg:1840178_9",
"text": "In this paper, we discuss wireless sensor and networking technologies for swarms of inexpensive aquatic surface drones in the context of the HANCAD project. The goal is to enable the swarm to perform maritime tasks such as sea-border patrolling and environmental monitoring, while keeping the cost of each drone low. Communication between drones is essential for the success of the project. Preliminary experiments show that XBee modules are promising for energy efficient multi-hop drone-to-drone communication.",
"title": ""
},
{
"docid": "neg:1840178_10",
"text": "Psychologists have repeatedly shown that a single statistical factor--often called \"general intelligence\"--emerges from the correlations among people's performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of \"collective intelligence\" exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group's performance on a wide variety of tasks. This \"c factor\" is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.",
"title": ""
},
{
"docid": "neg:1840178_11",
"text": "The impressive performance of utilizing deep learning or neural network has attracted much attention in both the industry and research communities, especially towards computer vision aspect related applications. Despite its superior capability of learning, generalization and interpretation on various form of input, micro-expression analysis field is yet remains new in applying this kind of computing system in automated expression recognition system. A new feature extractor, BiVACNN is presented in this paper, where it first estimates the optical flow fields from the apex frame, then encode the flow fields features using CNN. Concretely, the proposed method consists of three stages: apex frame acquisition, multivariate features formation and feature learning using CNN. In the multivariate features formation stage, we attempt to derive six distinct features from the apex details, which include: the apex itself, difference between the apex and onset frames, horizontal optical flow, vertical optical flow, magnitude and orientation. It is demonstrated that utilizing the horizontal and vertical optical flow capable to achieve 80% recognition accuracy in CASME II and SMIC-HS databases.",
"title": ""
},
{
"docid": "neg:1840178_12",
"text": "The manipulation of light-matter interactions in two-dimensional atomically thin crystals is critical for obtaining new optoelectronic functionalities in these strongly confined materials. Here, by integrating chemically grown monolayers of MoS2 with a silver-bowtie nanoantenna array supporting narrow surface-lattice plasmonic resonances, a unique two-dimensional optical system has been achieved. The enhanced exciton-plasmon coupling enables profound changes in the emission and excitation processes leading to spectrally tunable, large photoluminescence enhancement as well as surface-enhanced Raman scattering at room temperature. Furthermore, due to the decreased damping of MoS2 excitons interacting with the plasmonic resonances of the bowtie array at low temperatures stronger exciton-plasmon coupling is achieved resulting in a Fano line shape in the reflection spectrum. The Fano line shape, which is due to the interference between the pathways involving the excitation of the exciton and plasmon, can be tuned by altering the coupling strengths between the two systems via changing the design of the bowties lattice. The ability to manipulate the optical properties of two-dimensional systems with tunable plasmonic resonators offers a new platform for the design of novel optical devices with precisely tailored responses.",
"title": ""
},
{
"docid": "neg:1840178_13",
"text": "This paper presents an online feature selection mechanism for evaluating multiple features while tracking and adjusting the set of features used to improve tracking performance. Our hypothesis is that the features that best discriminate between object and background are also best for tracking the object. Given a set of seed features, we compute log likelihood ratios of class conditional sample densities from object and background to form a new set of candidate features tailored to the local object/background discrimination task. The two-class variance ratio is used to rank these new features according to how well they separate sample distributions of object and background pixels. This feature evaluation mechanism is embedded in a mean-shift tracking system that adaptively selects the top-ranked discriminative features for tracking. Examples are presented that demonstrate how this method adapts to changing appearances of both tracked object and scene background. We note susceptibility of the variance ratio feature selection method to distraction by spatially correlated background clutter and develop an additional approach that seeks to minimize the likelihood of distraction.",
"title": ""
},
{
"docid": "neg:1840178_14",
"text": "Smart buildings equipped with state-of-the-art sensors and meters are becoming more common. Large quantities of data are being collected by these devices. For a single building to benefit from its own collected data, it will need to wait for a long time to collect sufficient data to build accurate models to help improve the smart buildings systems. Therefore, multiple buildings need to cooperate to amplify the benefits from the collected data and speed up the model building processes. Apparently, this is not so trivial and there are associated challenges. In this paper, we study the importance of collaborative data analytics for smart buildings, its benefits, as well as presently possible models of carrying it out. Furthermore, we present a framework for collaborative fault detection and diagnosis as a case of collaborative data analytics for smart buildings. We also provide a preliminary analysis of the energy efficiency benefit of such collaborative framework for smart buildings. The result shows that significant energy savings can be achieved for smart buildings using collaborative data analytics.",
"title": ""
},
{
"docid": "neg:1840178_15",
"text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840178_16",
"text": "Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the system’s overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism and it is argued that commitments (pledges to undertake a specified course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.",
"title": ""
},
{
"docid": "neg:1840178_17",
"text": "A wavelet scattering network computes a translation invariant image representation which is stable to deformations and preserves high-frequency information for classification. It cascades wavelet transform convolutions with nonlinear modulus and averaging operators. The first network layer outputs SIFT-type descriptors, whereas the next layers provide complementary invariant information that improves classification. The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification. A scattering representation of stationary processes incorporates higher order moments and can thus discriminate textures having the same Fourier power spectrum. State-of-the-art classification results are obtained for handwritten digits and texture discrimination, with a Gaussian kernel SVM and a generative PCA classifier.",
"title": ""
},
{
"docid": "neg:1840178_18",
"text": "A substantial part of the operating costs of public transport is attributable to drivers, whose efficient use therefore is important. The compilation of optimal work packages is difficult, being NP-hard. In practice, algorithmic advances and enhanced computing power have led to significant progress in achieving better schedules. However, differences in labor practices among modes of transport and operating companies make production of a truly general system with acceptable performance a difficult proposition. TRACS II has overcome these difficulties, being used with success by a substantial number of bus and train operators. Many theoretical aspects of the system have been published previously. This paper shows for the first time how theory and practice have been brought together, explaining the many features which have been added to the algorithmic kernel to provide a user-friendly and adaptable system designed to provide maximum flexibility in practice. We discuss the extent to which users have been involved in system development, leading to many practical successes, and we summarize some recent achievements.",
"title": ""
},
{
"docid": "neg:1840178_19",
"text": "Three-level converters are becoming a realistic alternative to the conventional converters in high-power wind-energy applications. In this paper, a complete analytical strategy to model a back-to-back three-level converter is described. This tool permits us to adapt the control strategy to the specific application. Moreover, the model of different loads can be incorporated to the overall model. Both control strategy and load models are included in the complete system model. The proposed model pays special attention to the unbalance in the capacitors' voltage of three-level converters, including the dynamics of the capacitors' voltage. In order to validate the model and the control strategy proposed in this paper, a 3-MW three-level back-to-back power converter used as a power conditioning system of a variable speed wind turbine has been simulated. Finally, the described strategy has been implemented in a 50-kVA scalable prototype as well, providing a satisfactory performance",
"title": ""
}
] |
1840179 | Probabilistic sentential decision diagrams: Learning with massive logical constraints | [
{
"docid": "pos:1840179_0",
"text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.",
"title": ""
},
{
"docid": "pos:1840179_1",
"text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.",
"title": ""
}
] | [
{
"docid": "neg:1840179_0",
"text": "We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.",
"title": ""
},
{
"docid": "neg:1840179_1",
"text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.",
"title": ""
},
{
"docid": "neg:1840179_2",
"text": "Mobile terrestrial laser scanners (MTLS), based on light detection and ranging sensors, are used worldwide in agricultural applications. MTLS are applied to characterize the geometry and the structure of plants and crops for technical and scientific purposes. Although MTLS exhibit outstanding performance, their high cost is still a drawback for most agricultural applications. This paper presents a low-cost alternative to MTLS based on the combination of a Kinect v2 depth sensor and a real time kinematic global navigation satellite system (GNSS) with extended color information capability. The theoretical foundations of this system are exposed along with some experimental results illustrating their performance and limitations. This study is focused on open-field agricultural applications, although most conclusions can also be extrapolated to similar outdoor uses. The developed Kinect-based MTLS system allows to select different acquisition frequencies and fields of view (FOV), from one to 512 vertical slices. The authors conclude that the better performance is obtained when a FOV of a single slice is used, but at the price of a very low measuring speed. With that particular configuration, plants, crops, and objects are reproduced accurately. Future efforts will be directed to increase the scanning efficiency by improving both the hardware and software components and to make it feasible using both partial and full FOV.",
"title": ""
},
{
"docid": "neg:1840179_3",
"text": "The paper presents a safe robot navigation system based on omnidirectional vision. The 360 degree camera images are analyzed for obstacle detection and avoidance and of course for navigating safely in the given indoor environment. This module can process images in real time and extracts the direction and distance information of the obstacles from the camera system mounted on the robot. This two data is the output of the module. Because of the distortions of the omnidirectional vision, it is necessary to calibrate the camera and not only for that but also to get the right direction and distances information. Several image processing methods and technics were used which are investigated in the rest of this paper.",
"title": ""
},
{
"docid": "neg:1840179_4",
"text": "In this paper, we present an evolutionary trust game to investigate the formation of trust in the so-called sharing economy from a population perspective. To the best of our knowledge, this is the first attempt to model trust in the sharing economy using the evolutionary game theory framework. Our sharing economy trust model consists of four types of players: a trustworthy provider, an untrustworthy provider, a trustworthy consumer, and an untrustworthy consumer. Through systematic simulation experiments, five different scenarios with varying proportions and types of providers and consumers were considered. Our results show that each type of players influences the existence and survival of other types of players, and untrustworthy players do not necessarily dominate the population even when the temptation to defect (i.e., to be untrustworthy) is high. Our findings may have important implications for understanding the emergence of trust in the context of sharing economy transactions.",
"title": ""
},
{
"docid": "neg:1840179_5",
"text": "In Chapter 1, Reigeluth described design theory as being different from descriptive theory in that it offers means to achieve goals. For an applied field like education, design theory is more useful and more easily applied than its descriptive counterpart, learning theory. But none of the 22 theories described in this book has yet been developed to a state of perfection; at very least they can all benefit from more detailed guidance for applying their methods to diverse situations. And more theories are sorely needed to provide guidance for additional kinds of learning and human development and for different kinds of situations, including the use of new information technologies as tools. This leads us to the important question, \" What research methods are most helpful for creating and improving instructional design theories? \" In this chapter, we offer a detailed description of one research methodology that holds much promise for generating the kind of knowledge that we believe is most useful to educators—a methodology that several theorists in this book have intuitively used to develop their theories. We refer to this methodology as \"formative research\"—a kind of developmental research or action research that is intended to improve design theory for designing instructional practices or processes. Reigeluth (1989) and Romiszowski (1988) have recommended this approach to expand the knowledge base in instructional-design theory. Newman (1990) has suggested something similar for research on the organizational impact of computers in schools. And Greeno, Collins and Resnick (1996) have identified several groups of researchers who are conducting something similar that they call \" design experiments, \" in which \" researchers and practitioners, particularly teachers, collaborate in the design, implementation, and analysis of changes in practice. \" (p. 
15) Formative research has also been used for generating knowledge in as broad an area as systemic change in education We intend for this chapter to help guide educational researchers who are developing and refining instructional-design theories. Most researchers have not had the opportunity to learn formal research methodologies for developing design theories. Doctoral programs in universities tend to emphasize quantitative and qualitative research methodologies for creating descriptive knowledge of education. However, design theories are guidelines for practice, which tell us \"how to do\" education, not \"what is.\" We have found that traditional quantitative research methods (e.g., experiments, surveys, correlational analyses) are not particularly useful for improving instructional-design theory— especially in the early stages of development. Instead, …",
"title": ""
},
{
"docid": "neg:1840179_6",
"text": "This first installment of the new Human Augmentation department looks at various technologies designed to augment the human intellect and amplify human perception and cognition. Linking back to early work in interactive computing, Albrecht Schmidt considers how novel technologies can create a new relationship between digital technologies and humans.",
"title": ""
},
{
"docid": "neg:1840179_7",
"text": "Pupil diameter was monitored during picture viewing to assess effects of hedonic valence and emotional arousal on pupillary responses. Autonomic activity (heart rate and skin conductance) was concurrently measured to determine whether pupillary changes are mediated by parasympathetic or sympathetic activation. Following an initial light reflex, pupillary changes were larger when viewing emotionally arousing pictures, regardless of whether these were pleasant or unpleasant. Pupillary changes during picture viewing covaried with skin conductance change, supporting the interpretation that sympathetic nervous system activity modulates these changes in the context of affective picture viewing. Taken together, the data provide strong support for the hypothesis that the pupil's response during affective picture viewing reflects emotional arousal associated with increased sympathetic activity.",
"title": ""
},
{
"docid": "neg:1840179_8",
"text": "We investigate a simple yet effective method to introduce inhibitory and excitatory interactions between units in the layers of a deep neural network classifier. The method is based on the greedy layer-wise procedure of deep learning algorithms and extends the denoising autoencoder (Vincent et al., 2008) by adding asymmetric lateral connections between its hidden coding units, in a manner that is much simpler and computationally more efficient than previously proposed approaches. We present experiments on two character recognition problems which show for the first time that lateral connections can significantly improve the classification performance of deep networks.",
"title": ""
},
{
"docid": "neg:1840179_9",
"text": "Data is currently one of the most important assets for companies in every field. The continuous growth in the importance and volume of data has created a new problem: it cannot be handled by traditional analysis techniques. This problem was, therefore, solved through the creation of a new paradigm: Big Data. However, Big Data originated new issues related not only to the volume or the variety of the data, but also to data security and privacy. In order to obtain a full perspective of the problem, we decided to carry out an investigation with the objective of highlighting the main issues regarding Big Data security, and also the solutions proposed by the scientific community to solve them. In this paper, we explain the results obtained after applying a systematic mapping study to security in the Big Data ecosystem. It is almost impossible to carry out detailed research into the entire topic of security, and the outcome of this research is, therefore, a big picture of the main problems related to security in a Big Data system, along with the principal solutions to them proposed by the research community.",
"title": ""
},
{
"docid": "neg:1840179_10",
"text": "This paper introduces a complete design method to construct an adaptive fuzzy logic controller (AFLC) for DC–DC converter. In a conventional fuzzy logic controller (FLC), knowledge on the system supplied by an expert is required for developing membership functions (parameters) and control rules. The proposed AFLC, on the other hand, do not required expert for making parameters and control rules. Instead, parameters and rules are generated using a model data file, which contains summary of input–output pairs. The FLC use Mamdani type fuzzy logic controllers for the defuzzification strategy and inference operators. The proposed controller is designed and verified by digital computer simulation and then implemented for buck, boost and buck–boost converters by using an 8-bit microcontroller. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840179_11",
"text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.",
"title": ""
},
{
"docid": "neg:1840179_12",
"text": "Head pose estimation is a fundamental task for face and social related research. Although 3D morphable model (3DMM) based methods relying on depth information usually achieve accurate results, they usually require frontal or mid-profile poses which preclude a large set of applications where such conditions can not be garanteed, like monitoring natural interactions from fixed sensors placed in the environment. A major reason is that 3DMM models usually only cover the face region. In this paper, we present a framework which combines the strengths of a 3DMM model fitted online with a prior-free reconstruction of a 3D full head model providing support for pose estimation from any viewpoint. In addition, we also proposes a symmetry regularizer for accurate 3DMM fitting under partial observations, and exploit visual tracking to address natural head dynamics with fast accelerations. Extensive experiments show that our method achieves state-of-the-art performance on the public BIWI dataset, as well as accurate and robust results on UbiPose, an annotated dataset of natural interactions that we make public and where adverse poses, occlusions or fast motions regularly occur.",
"title": ""
},
{
"docid": "neg:1840179_13",
"text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound sources localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sounds direction on the move and estimates sound sources position using triangulation. Assuming the movement of sound sources, the system set a time limit and uses only the last few seconds data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process",
"title": ""
},
{
"docid": "neg:1840179_14",
"text": "Speeded visual word naming and lexical decision performance are reported for 2428 words for young adults and healthy older adults. Hierarchical regression techniques were used to investigate the unique predictive variance of phonological features in the onsets, lexical variables (e.g., measures of consistency, frequency, familiarity, neighborhood size, and length), and semantic variables (e.g., imageability and semantic connectivity). The influence of most variables was highly task dependent, with the results shedding light on recent empirical controversies in the available word recognition literature. Semantic-level variables accounted for unique variance in both speeded naming and lexical decision performance, with the latter task producing the largest semantic-level effects. Discussion focuses on the utility of large-scale regression studies in providing a complementary approach to the standard factorial designs to investigate visual word recognition.",
"title": ""
},
{
"docid": "neg:1840179_15",
"text": "Two competing encoding concepts are known to scale well with growing amounts of XML data: XPath Accelerator encoding implemented by MonetDB for in-memory documents and X-Hive’s Persistent DOM for on-disk storage. We identified two ways to improve XPath Accelerator and present prototypes for the respective techniques: BaseX boosts inmemory performance with optimized data and value index structures while Idefix introduces native block-oriented persistence with logarithmic update behavior for true scalability, overcoming main-memory constraints. An easy-to-use Java-based benchmarking framework was developed and used to consistently compare these competing techniques and perform scalability measurements. The established XMark benchmark was applied to all four systems under test. Additional fulltext-sensitive queries against the well-known DBLP database complement the XMark results. Not only did the latest version of X-Hive finally surprise with good scalability and performance numbers. Also, both BaseX and Idefix hold their promise to push XPath Accelerator to its limits: BaseX efficiently exploits available main memory to speedup XML queries while Idefix surpasses main-memory constraints and rivals the on-disk leadership of X-Hive. The competition between XPath Accelerator and Persistent DOM definitely is relaunched.",
"title": ""
},
{
"docid": "neg:1840179_16",
"text": "A new, systematic, simplified design procedure for quasi-Yagi antennas is presented. The design is based on the simple impedance matching among antenna components: i.e., transition, feed, and antenna. This new antenna design is possible due to the newly developed ultra-wideband transition. As design examples, wideband quasi-Yagi antennas are successfully designed and implemented in Ku- and Ka-bands with frequency bandwidths of 53.2% and 29.1%, and antenna gains of 4-5 dBi and 5.2-5.8 dBi, respectively. The design method can be applied to other balanced antennas and their arrays.",
"title": ""
},
{
"docid": "neg:1840179_17",
"text": "Women diagnosed with complete spinal cord injury (SCI) at T10 or higher report sensations generated by vaginal-cervical mechanical self-stimulation (CSS). In this paper we review brain responses to sexual arousal and orgasm in such women, and further hypothesize that the afferent pathway for this unexpected perception is provided by the Vagus nerves, which bypass the spinal cord. Using functional magnetic resonance imaging (fMRI), we ascertained that the region of the medulla oblongata to which the Vagus nerves project (the Nucleus of the Solitary Tract or NTS) is activated by CSS. We also used an objective measure, CSS-induced analgesia response to experimentally induced finger pain, to ascertain the functionality of this pathway. During CSS, several women experienced orgasms. Brain regions activated during orgasm included the hypothalamic paraventricular nucleus, amygdala, accumbens-bed nucleus of the stria terminalis-preoptic area, hippocampus, basal ganglia (especially putamen), cerebellum, and anterior cingulate, insular, parietal and frontal cortices, and lower brainstem (central gray, mesencephalic reticular formation, and NTS). We conclude that the Vagus nerves provide a spinal cord-bypass pathway for vaginal-cervical sensibility and that activation of this pathway can produce analgesia and orgasm.",
"title": ""
},
{
"docid": "neg:1840179_18",
"text": "Cyber attacks come in various approaches and forms, either internal or external. Remote access and spyware are forms of cyber attack that leave an organization susceptible to vulnerabilities. This paper investigates illegal activities and potential evidence of cyber attack through studying the registry on the Windows 7 Home Premium (32 bit) Operating System when using the Virtual Network Computing (VNC) application and a keylogger application. The aim is to trace the registry artifacts left by an attacker who connected using the Virtual Network Computing (VNC) protocol within the Windows 7 Operating System (OS). The analysis of the registry focused on detecting unwanted applications or unauthorized access to the machine with regard to user activity via the VNC connection, seeking potential evidence of illegal activities by investigating the Registration Entries file and image file using the Forensic Toolkit (FTK) Imager. The outcome of this study is the findings on the artifacts which correlate to user activity.",
"title": ""
}
] |
1840180 | Depth camera tracking with contour cues | [
{
"docid": "pos:1840180_0",
"text": "Registering 2 or more range scans is a fundamental problem, with application to 3D modeling. While this problem is well addressed by existing techniques such as ICP when the views overlap significantly at a good initialization, no satisfactory solution exists for wide baseline registration. We propose here a novel approach which leverages contour coherence and allows us to align two wide baseline range scans with limited overlap from a poor initialization. Inspired by ICP, we maximize the contour coherence by building robust corresponding pairs on apparent contours and minimizing their distances in an iterative fashion. We use the contour coherence under a multi-view rigid registration framework, and this enables the reconstruction of accurate and complete 3D models from as few as 4 frames. We further extend it to handle articulations, and this allows us to model articulated objects such as human body. Experimental results on both synthetic and real data demonstrate the effectiveness and robustness of our contour coherence based registration approach to wide baseline range scans, and to 3D modeling.",
"title": ""
},
{
"docid": "pos:1840180_1",
"text": "Since the initial comparison of Seitz et al. [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al. [59], showing the results to compare more than favorably with the current state-of-the-art methods.",
"title": ""
},
{
"docid": "pos:1840180_2",
"text": "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.",
"title": ""
}
] | [
{
"docid": "neg:1840180_0",
"text": "For learning in big datasets, the classification performance of ELM might be low because features are not properly extracted from the input samples. To address this problem, the hierarchical extreme learning machine (H-ELM) framework was proposed based on the hierarchical learning architecture of the multilayer perceptron. H-ELM is composed of two parts: the first is the unsupervised multilayer encoding part and the second is the supervised feature classification part. H-ELM can give a higher accuracy rate than the traditional ELM. However, its classification performance can still be enhanced. Therefore, this paper proposes a new method, namely the extending hierarchical extreme learning machine (EH-ELM). For the extended supervisor part of EH-ELM, we took an idea from the two-layer extreme learning machine. To evaluate the performance of EH-ELM, three different image datasets, Semeion, MNIST, and NORB, were studied. The experimental results show that EH-ELM achieves better performance than H-ELM and the other multi-layer frameworks.",
"title": ""
},
{
"docid": "neg:1840180_1",
"text": "Monocular 3D object parsing is highly desirable in various scenarios including occlusion reasoning and holistic scene interpretation. We present a deep convolutional neural network (CNN) architecture to localize semantic parts in 2D image and 3D space while inferring their visibility states, given a single RGB image. Our key insight is to exploit domain knowledge to regularize the network by deeply supervising its hidden layers, in order to sequentially infer intermediate concepts associated with the final task. To acquire training data in desired quantities with ground truth 3D shape and relevant concepts, we render 3D object CAD models to generate large-scale synthetic data and simulate challenging occlusion configurations between objects. We train the network only on synthetic data and demonstrate state-of-the-art performances on real image benchmarks including an extended version of KITTI, PASCAL VOC, PASCAL3D+ and IKEA for 2D and 3D keypoint localization and instance segmentation. The empirical results substantiate the utility of our deep supervision scheme by demonstrating effective transfer of knowledge from synthetic data to real images, resulting in less overfitting compared to standard end-to-end training.",
"title": ""
},
{
"docid": "neg:1840180_2",
"text": "A consequent pole, dual rotor, axial flux vernier permanent magnet (VPM) machine is developed to reduce magnet usage and increase torque density. Its end winding length is much shorter than that of regular VPM machines due to its toroidal winding configuration. The configurations and features of the proposed machine are discussed. Benefiting from its vernier and consequent pole structure, this new machine exhibits much higher back-EMF and torque density than a regular dual rotor axial flux machine, while the magnet usage is halved. The influence of main design parameters, such as slot opening, ratio of inner to outer stator diameter, magnet thickness, etc., on torque performance is analyzed based on quasi-3-dimensional (quasi-3D) finite element analysis (FEA). The analysis results are validated by real 3D FEA.",
"title": ""
},
{
"docid": "neg:1840180_3",
"text": "ECONOMISTS are frequently asked to measure the effects of an economic event on the value of firms. On the surface this seems like a difficult task, but a measure can be constructed easily using an event study. Using financial market data, an event study measures the impact of a specific event on the value of a firm. The usefulness of such a study comes from the fact that, given rationality in the marketplace, the effects of an event will be reflected immediately in security prices. Thus a measure of the event's economic impact can be constructed using security prices observed over a relatively short time period. In contrast, direct productivity related measures may require many months or even years of observation. The event study has many applications. In accounting and finance research, event studies have been applied to a variety of firm specific and economy wide events. Some examples include mergers and acquisitions, earnings announcements, issues of new debt or equity, and announcements of macroeconomic variables such as the trade deficit. However, applications in other fields are also abundant. For example, event studies are used in the field of law and economics to measure the impact on the value of a firm of a change in the regulatory environment (see G. William Schwert 1981) and in legal liability cases event studies are used to assess damages (see Mark Mitchell and Jeffry Netter 1994). In the majority of applications, the focus is the effect of an event on the price of a particular class of securities of the firm, most often common equity. In this paper the methodology is discussed in terms of applications that use common equity. However, event studies can be applied using debt securities with little modification. Event studies have a long history. Perhaps the first published study is James Dolley (1933). In this work, he examines the price effects of stock splits, studying nominal price changes at the time of the split. Using a sample of 95 splits from 1921 to 1931, he finds that the price in-",
"title": ""
},
{
"docid": "neg:1840180_4",
"text": "In the last two decades, the number of Higher Education Institutions (HEIs) has grown rapidly in India. Since most of these institutions are privately run, cut-throat competition has arisen among them in attracting students for admission. This is the reason institutions focus on the number of students rather than on the quality of education. This paper presents a data mining application to generate predictive models for engineering student dropout management. Given new records of incoming students, the predictive model can produce a short, accurate prediction list identifying students who most need support from the student dropout program. The results show that the machine learning algorithm is able to establish an effective predictive model from the existing student dropout data. Keywords– Data Mining, Machine Learning Algorithms, Dropout Management and Predictive Models",
"title": ""
},
{
"docid": "neg:1840180_5",
"text": "The frequent and protracted use of video games with serious personal, family and social consequences is no longer just a pleasant pastime and could lead to mental and physical health problems. Although there is no official recognition of video game addiction on the Internet as a mild mental health disorder, further scientific research is needed.",
"title": ""
},
{
"docid": "neg:1840180_6",
"text": "This paper presents a simple, efficient, yet robust approach, named joint-scale local binary pattern (JLBP), for texture classification. In the proposed approach, the joint-scale strategy is developed firstly, and the neighborhoods of different scales are fused together by a simple arithmetic operation. And then, the descriptor is extracted from the mutual integration of the local patches based on the conventional local binary pattern (LBP). The proposed scheme can not only describe the micro-textures of a local structure, but also the macro-textures of a larger area because of the joint of multiple scales. Further, motivated by the completed local binary pattern (CLBP) scheme, the completed JLBP (CJLBP) is presented to enhance its power. The proposed descriptor is evaluated in relation to other recent LBP-based patterns and non-LBP methods on popular benchmark texture databases, Outex, CUReT and UIUC. Generally, the experimental results show that the new method performs better than the state-of-the-art techniques.",
"title": ""
},
{
"docid": "neg:1840180_7",
"text": "In this paper we describe two methods for estimating the motion parameters of an image sequence. For a sequence of images, the global motion can be described by independent motion models. On the other hand, a sequence contains many pairwise relative motion constraints that can be solved for efficiently. In this paper we show how to linearly solve for consistent global motion models using this highly redundant set of constraints. In the first case, our method involves estimating all available pairwise relative motions and linearly fitting a global motion model to these estimates. In the second instance, we exploit the fact that algebraic (i.e. epipolar) constraints between various image pairs are all related to each other by the global motion model. This results in an estimation method that directly computes the motion of the sequence by using all possible algebraic constraints. Unlike using reprojection error, our optimisation method does not solve for the structure of points, resulting in a reduction of the dimensionality of the search space. Our algorithms are used for both 3D camera motion estimation and camera calibration. We provide real examples of both applications.",
"title": ""
},
{
"docid": "neg:1840180_8",
"text": "Queueing network models have proved to be cost effective tools for analyzing modern computer systems. This tutorial paper presents the basic results using the operational approach, a framework which allows the analyst to test whether each assumption is met in a given system. The early sections describe the nature of queueing network models and their applications for calculating and predicting performance quantities. The basic performance quantities--such as utilizations, mean queue lengths, and mean response times--are defined, and operational relationships among them are derived. Following this, the concept of job flow balance is introduced and used to study asymptotic throughputs and response times. The concepts of state transition balance, one-step behavior, and homogeneity are then used to relate the proportions of time that each system state is occupied to the parameters of job demand and to device characteristics. Efficient methods for computing basic performance quantities are also described. Finally the concept of decomposition is used to simplify analyses by replacing subsystems with equivalent devices. All concepts are illustrated liberally with examples.",
"title": ""
},
{
"docid": "neg:1840180_9",
"text": "A modern approach to the FOREX currency exchange market requires support from computer algorithms to manage huge volumes of transactions and to find opportunities among the vast number of currency pairs traded daily. There are many well-known techniques used by market participants on both the FOREX and stock-exchange markets (i.e., fundamental and technical analysis), but nowadays AI-based techniques seem to play the key role in automated transaction and decision-support systems. This paper presents a comprehensive analysis of Feed Forward Multilayer Perceptron (ANN) parameters and their impact on the ability to accurately forecast the FOREX trend of a selected currency pair. The goal of this paper is to provide information on how to construct an ANN, with particular respect to its parameters and training method, to obtain the best possible forecasting capabilities. The ANN parameters investigated in this paper include: number of hidden layers, number of neurons in hidden layers, use of constant/bias neurons, and activation functions; the paper also reviews the impact of the training methods in the process of creating a reliable and valuable ANN, useful for predicting market trends. The experimental part was performed on historical data for the EUR/USD pair.",
"title": ""
},
{
"docid": "neg:1840180_10",
"text": "From email to online banking, passwords are an essential component of modern internet use. Yet, users do not always have good password security practices, leaving their accounts vulnerable to attack. We conducted a study which combines self-report survey responses with measures of actual online behavior gathered from 134 participants over the course of six weeks. We find that people do tend to re-use each password on 1.7–3.4 different websites, they reuse passwords that are more complex, and mostly they tend to re-use passwords that they have to enter frequently. We also investigated whether self-report measures are accurate indicators of actual behavior, finding that though people understand password security, their self-reported intentions have only a weak correlation with reality. These findings suggest that users manage the challenge of having many passwords by choosing a complex password on a website where they have to enter it frequently in order to memorize that password, and then re-using that strong password across other websites.",
"title": ""
},
{
"docid": "neg:1840180_11",
"text": "New mothers can experience social exclusion, particularly during the early weeks when infants are solely dependent on their mothers. We used ethnographic methods to investigate whether technology plays a role in supporting new mothers. Our research identified two core themes: (1) the need to improve confidence as a mother; and (2) the need to be more than 'just' a mother. We reflect on these findings both in terms of those interested in designing applications and services for motherhood and also the wider CHI community.",
"title": ""
},
{
"docid": "neg:1840180_12",
"text": "Plants are affected by complex genome×environment×management interactions which determine phenotypic plasticity as a result of the variability of genetic components. Whereas great advances have been made in the cost-efficient and high-throughput analyses of genetic information and non-invasive phenotyping, the large-scale analyses of the underlying physiological mechanisms lag behind. The external phenotype is determined by the sum of the complex interactions of metabolic pathways and intracellular regulatory networks that is reflected in an internal, physiological, and biochemical phenotype. These various scales of dynamic physiological responses need to be considered, and genotyping and external phenotyping should be linked to the physiology at the cellular and tissue level. A high-dimensional physiological phenotyping across scales is needed that integrates the precise characterization of the internal phenotype into high-throughput phenotyping of whole plants and canopies. By this means, complex traits can be broken down into individual components of physiological traits. Since the higher resolution of physiological phenotyping by 'wet chemistry' is inherently limited in throughput, high-throughput non-invasive phenotyping needs to be validated and verified across scales to be used as proxy for the underlying processes. Armed with this interdisciplinary and multidimensional phenomics approach, plant physiology, non-invasive phenotyping, and functional genomics will complement each other, ultimately enabling the in silico assessment of responses under defined environments with advanced crop models. This will allow generation of robust physiological predictors also for complex traits to bridge the knowledge gap between genotype and phenotype for applications in breeding, precision farming, and basic research.",
"title": ""
},
{
"docid": "neg:1840180_13",
"text": "This paper presents a novel methodological approach of how to design, conduct and analyse robot-assisted play. This approach is inspired by non-directive play therapy. The experimenter participates in the experiments, but the child remains the main leader for play. Besides, beyond inspiration from non-directive play therapy, this approach enables the experimenter to regulate the interaction under specific conditions in order to guide the child or ask her questions about reasoning or affect related to the robot. This approach has been tested in a long-term study with six children with autism in a school setting. An autonomous robot with zoomorphic, dog-like appearance was used in the studies. The children’s progress was analyzed according to three dimensions, namely, Play, Reasoning and Affect. Results from the case-study evaluations have shown the capability of the method to meet each child’s needs and abilities. Children who mainly played solitarily progressively experienced basic imitation games with the experimenter. Children who proactively played socially progressively experienced higher levels of play and constructed more reasoning related to the robot. They also expressed some interest in the robot, including, on occasion, affect.",
"title": ""
},
{
"docid": "neg:1840180_14",
"text": "We study the effectiveness of neural sequence models for premise selection in automated theorem proving, one of the main bottlenecks in the formalization of mathematics. We propose a two stage approach for this task that yields good results for the premise selection task on the Mizar corpus while avoiding the handengineered features of existing state-of-the-art models. To our knowledge, this is the first time deep learning has been applied to theorem proving on a large scale.",
"title": ""
},
{
"docid": "neg:1840180_15",
"text": "Out-of-autoclave (OoA) prepreg materials and methods have gained acceptance over the past decade because of the ability to produce autoclave-quality components under vacuum-bag-only (VBO) cure. To achieve low porosity and tight dimensional tolerances, VBO prepregs rely on specific microstructural features and processing techniques. Furthermore, successful cure is contingent upon appropriate material property and process parameter selection. In this article, we review the existing literature on VBO prepreg processing to summarize and synthesize knowledge on these issues. First, the context, development, and defining properties of VBO prepregs are presented. The key processing phenomena and the influence on quality are subsequently described. Finally, cost and environmental performance are considered. Throughout, we highlight key considerations for VBO prepreg processing and identify areas where further study is required.",
"title": ""
},
{
"docid": "neg:1840180_16",
"text": "Due to the rapid development of mobile social networks, mobile big data play an important role in providing mobile social users with various mobile services. However, as mobile big data have inherent properties, current MSNs face a challenge to provide mobile social users with a satisfactory quality of experience. Therefore, in this article, we propose a novel framework to deliver mobile big data over content-centric mobile social networks. At first, the characteristics and challenges of mobile big data are studied. Then the content-centric network architecture to deliver mobile big data in MSNs is presented, where each datum consists of interest packets and data packets, respectively. Next, how to select the agent node to forward interest packets and the relay node to transmit data packets is given by defining priorities of interest packets and data packets. Finally, simulation results show the performance of our framework with varied parameters.",
"title": ""
},
{
"docid": "neg:1840180_17",
"text": "Using the information systems lifecycle as a unifying framework, we review online communities research and propose a sequence for incorporating success conditions during initiation and development to increase their chances of becoming a successful community, one in which members participate actively and develop lasting relationships. Online communities evolve following distinctive lifecycle stages and recommendations for success are more or less relevant depending on the developmental stage of the online community. In addition, the goal of the online community under study determines the components to include in the development of a successful online community. Online community builders and researchers will benefit from this review of the conditions that help online communities succeed.",
"title": ""
},
{
"docid": "neg:1840180_18",
"text": "DNN-based cross-modal retrieval is a research hotspot to retrieve across different modalities as image and text, but existing methods often face the challenge of insufficient cross-modal training data. In single-modal scenario, similar problem is usually relieved by transferring knowledge from large-scale auxiliary datasets (as ImageNet). Knowledge from such single-modal datasets is also very useful for cross-modal retrieval, which can provide rich general semantic information that can be shared across different modalities. However, it is challenging to transfer useful knowledge from single-modal (as image) source domain to cross-modal (as image/text) target domain. Knowledge in source domain cannot be directly transferred to both two different modalities in target domain, and the inherent cross-modal correlation contained in target domain provides key hints for cross-modal retrieval which should be preserved during transfer process. This paper proposes Cross-modal Hybrid Transfer Network (CHTN) with two subnetworks: Modal-sharing transfer subnetwork utilizes the modality in both source and target domains as a bridge, for transferring knowledge to both two modalities simultaneously; Layer-sharing correlation subnetwork preserves the inherent cross-modal semantic correlation to further adapt to cross-modal retrieval task. Cross-modal data can be converted to common representation by CHTN for retrieval, and comprehensive experiments on 3 datasets show its effectiveness.",
"title": ""
},
{
"docid": "neg:1840180_19",
"text": "Recent breakthroughs in computational capabilities and optimization algorithms have enabled a new class of signal processing approaches based on deep neural networks (DNNs). These algorithms have been extremely successful in the classification of natural images, audio, and text data. In particular, a special type of DNNs, called convolutional neural networks (CNNs) have recently shown superior performance for object recognition in image processing applications. This paper discusses modern training approaches adopted from the image processing literature and shows how those approaches enable significantly improved performance for synthetic aperture radar (SAR) automatic target recognition (ATR). In particular, we show how a set of novel enhancements to the learning algorithm, based on new stochastic gradient descent approaches, generate significant classification improvement over previously published results on a standard dataset called MSTAR.",
"title": ""
}
] |
1840181 | Modelling IT projects success with Fuzzy Cognitive Maps | [
{
"docid": "pos:1840181_0",
"text": "Fuzzy cognitive maps (FCMs) are fuzzy-graph structures for representing causal reasoning. Their fuzziness allows hazy degrees of causality between hazy causal objects (concepts). Their graph structure allows systematic causal propagation, in particular forward and backward chaining, and it allows knowledge bases to be grown by connecting different FCMs. FCMs are especially applicable to soft knowledge domains and several example FCMs are given. Causality is represented as a fuzzy relation on causal concepts. A fuzzy causal algebra for governing causal propagation on FCMs is developed. FCM matrix representation and matrix operations are presented in the Appendix.",
"title": ""
}
] | [
{
"docid": "neg:1840181_0",
"text": "Stack Overflow is widely regarded as the most popular Community driven Question Answering (CQA) website for programmers. Questions posted on Stack Overflow which are not related to programming topics, are marked as `closed' by experienced users and community moderators. A question can be `closed' for five reasons -- duplicate, off-topic, subjective, not a real question and too localized. In this work, we present the first study of `closed' questions on Stack Overflow. We download 4 years of publicly available data which contains 3.4 Million questions. We first analyze and characterize the complete set of 0.1 Million `closed' questions. Next, we use a machine learning framework and build a predictive model to identify a `closed' question at the time of question creation.\n One of our key findings is that despite being marked as `closed', subjective questions contain high information value and are very popular with the users. We observe an increasing trend in the percentage of closed questions over time and find that this increase is positively correlated to the number of newly registered users. In addition, we also see a decrease in community participation to mark a `closed' question which has led to an increase in moderation job time. We also find that questions closed with the Duplicate and Off Topic labels are relatively more prone to reputation gaming. Our analysis suggests broader implications for content quality maintenance on CQA websites. For the `closed' question prediction task, we make use of multiple genres of feature sets based on - user profile, community process, textual style and question content. We use a state-of-art machine learning classifier based on an ensemble framework and achieve an overall accuracy of 70.3%. Analysis of the feature space reveals that `closed' questions are relatively less informative and descriptive than non-`closed' questions. 
To the best of our knowledge, this is the first experimental study to analyze and predict `closed' questions on Stack Overflow.",
"title": ""
},
{
"docid": "neg:1840181_1",
"text": "Toward materializing the recently identified potential of cognitive neuroscience for IS research (Dimoka, Pavlou and Davis 2007), this paper demonstrates how functional neuroimaging tools can enhance our understanding of IS theories. Specifically, this study aims to uncover the neural mechanisms that underlie technology adoption by identifying the brain areas activated when users interact with websites that differ on their level of usefulness and ease of use. Besides localizing the neural correlates of the TAM constructs, this study helps understand their nature and dimensionality, as well as uncover hidden processes associated with intentions to use a system. The study also identifies certain technological antecedents of the TAM constructs, and shows that the brain activations associated with perceived usefulness and perceived ease of use predict selfreported intentions to use a system. The paper concludes by discussing the study’s implications for underscoring the potential of functional neuroimaging for IS research and the TAM literature.",
"title": ""
},
{
"docid": "neg:1840181_2",
"text": "An extensive data search among various types of developmental and evolutionary sequences yielded a `four quadrant' model of consciousness and its development (the four quadrants being intentional, behavioural, cultural, and social). Each of these dimensions was found to unfold in a sequence of at least a dozen major stages or levels. Combining the four quadrants with the dozen or so major levels in each quadrant yields an integral theory of consciousness that is quite comprehensive in its nature and scope. This model is used to indicate how a general synthesis and integration of twelve of the most influential schools of consciousness studies can be effected, and to highlight some of the most significant areas of future research. The conclusion is that an `all-quadrant, all-level' approach is the minimum degree of sophistication that we need into order to secure anything resembling a genuinely integral theory of consciousness.",
"title": ""
},
{
"docid": "neg:1840181_3",
"text": "Phytoremediation is an important process in the removal of heavy metals and contaminants from the soil and the environment. Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Phytoremediation in phytoextraction is a major technique. In this process is the use of plants or algae to remove contaminants in the soil, sediment or water in the harvesting of plant biomass. Heavy metal is generally known set of elements with atomic mass (> 5 gcm -3), particularly metals such as exchange of cadmium, lead and mercury. Between different pollutant cadmium (Cd) is the most toxic and plant and animal heavy metals. Mustard (Brassica juncea L.) and Sunflower (Helianthus annuus L.) are the plant for the production of high biomass and rapid growth, and it seems that the appropriate species for phytoextraction because it can compensate for the low accumulation of cadmium with a much higher biomass yield. To use chelators, such as acetic acid, ethylene diaminetetraacetic acid (EDTA), and to increase the solubility of metals in the soil to facilitate easy availability indiscernible and the absorption of the plant from root leg in vascular plants. *Corresponding Author: Awais Shakoor awais.shakoor22@gmail.com Journal of Biodiversity and Environmental Sciences (JBES) ISSN: 2220-6663 (Print) 2222-3045 (Online) Vol. 10, No. 3, p. 88-98, 2017 http://www.innspub.net J. Bio. Env. Sci. 2017 89 | Shakoor et al. Introduction Phytoremediation consists of Greek and words of \"station\" and Latin remedium plants, which means \"rebalancing\" describes the treatment of environmental problems treatment (biological) through the use of plants that mitigate the environmental problem without digging contaminated materials and disposed of elsewhere. Controlled by the plant interactions with groundwater and organic and inorganic contaminated materials in specific locations to achieve therapeutic targets molecules site application (Landmeyer, 2011). 
Phytoremediation is the use of green plants to remove contaminants from the environment or render them harmless. The technology uses plants to \"vacuum\" heavy metals out of the soil through their roots; such unique plants must be able to withstand and survive high levels of heavy metals in the soil (Baker, 2000). Increasing population and growing industrialization are major causes of the water and soil contamination that is harmful for the environment as well as human health. In the whole world, contamination of the soil by heavy metals has become a very serious issue, so removal of these heavy metals from the soil is very necessary to protect the soil and human health. Both inorganic and organic contaminants, like petroleum, heavy metals, agricultural waste, pesticides and fertilizers, are the main sources that deteriorate soil health (Chirakkara et al., 2016). Heavy metals play different roles in biological systems, so they can be divided into two groups, essential and non-essential. Those heavy metals which play a vital role in biochemical and physiological functions in some living organisms are called essential heavy metals, like zinc (Zn), nickel (Ni) and copper (Cu) (Cempel and Nikel, 2006). Heavy metals that do not play any role in biochemical or physiological functions in living organisms are called non-essential heavy metals, such as mercury (Hg), lead (Pb), arsenic (As), and cadmium (Cd) (Dabonne et al., 2010). Cadmium (Cd) is considered a non-essential heavy metal that is more toxic at very low concentrations than other non-essential heavy metals. It is toxic to plant, human and animal health. Cd causes serious diseases in humans through the food chain (Rafiq et al., 2014). So, removal of Cd from the soil is a very important problem (Neilson and Rajakaruna, 2015). 
Several methods are used to remove Cd from the soil, such as physical, chemical and physiochemical treatments that increase the soil pH (Liu et al., 2015). The main sources of Cd contamination in the soil and environment are automobile emissions, batteries and commercial fertilizers (Liu et al., 2015). Phytoremediation is a promising technique for removing heavy metals from the soil (Ma et al., 2011). Plants take up the heavy metals through the roots and change the soil properties, which helps increase soil fertility (Mench et al., 2009). Plants can help clean many types of pollution, including metals, pesticides, explosives, and oil. Plants also help prevent wind, rain, and groundwater from carrying pollution off site to other areas. Phytoremediation works best in locations with low to moderate amounts of pollution. Plants absorb harmful chemicals from the soil when their roots take in water and nutrients from contaminated soils, streams and groundwater. Once inside the plant, chemicals can be stored in the roots, stems, or leaves, changed into less harmful chemicals within the plant, or changed into gases that are released into the air (US Environmental Protection Agency, 2001). Phytoremediation is the direct use of living green plants to stabilize or reduce pollution in soil, sludge, sediment, surface water or groundwater; sites with low concentrations of pollutants over large cleanup areas and at shallow depths offer favorable circumstances for plant-based treatment (US Environmental Protection Agency, 2011). Phytoremediation is the use of plants for the treatment of contaminated soil sites, sediments and water. It is best applied at sites with shallow, persistent organic, nutrient, or metal pollution. Phytoremediation is an emerging technology that is attractive for contaminated sites because of its low cost and versatility (Schnoor, 1997). 
Contaminated soils can thus be treated on site using plants. Phytoremediation employs plants that grow while excessively accumulating metals from contaminated soils (National Research Council, 1997). Phytoremediation concentrates pollutants from contaminated soil, water or air in plants that are able to contain, degrade or eliminate metals, pesticides, solvents, explosives, crude oil and its derivatives, and other contaminants from the media that contain them. Phytoremediation comprises several techniques, and the choice among them depends on different factors, like soil type, contaminant type, soil depth and ground water level, as well as the specific operating conditions and technology applied at the contaminated site (Hyman and Dupont, 2001). Techniques of phytoremediation Different techniques are involved in phytoremediation, such as phytoextraction, phytostabilisation, phytotransformation, phytostimulation, phytovolatilization, and rhizofiltration. Phytoextraction Phytoextraction, also called phytoabsorption or phytoaccumulation, is a technique in which heavy metals are removed by uptake through the roots from the water and soil environment and accumulated in the shoot (Rafati et al., 2011). Phytostabilisation Phytostabilisation, also known as phytoimmobilization, is a technique in which different types of plants are used to stabilize contaminants in the soil environment (Ali et al., 2013). By using this technique, the bioavailability and mobility of the different contaminants are reduced, which helps prevent their movement into the food chain as well as into ground water (Erakhrumen, 2007). Nevertheless, phytostabilisation can stop the movement of heavy metals, but it is not a permanent solution for removing contamination from the soil. Basically, phytostabilisation is a management approach for inactivating potentially toxic heavy metal contaminants in the soil environment (Vangronsveld et al., 2009).",
"title": ""
},
{
"docid": "neg:1840181_4",
"text": "ICN has received a lot of attention in recent years, and is a promising approach for the Future Internet design. As multimedia is the dominating traffic in today's and (most likely) the Future Internet, it is important to consider this type of data transmission in the context of ICN. In particular, the adaptive streaming of multimedia content is a promising approach for usage within ICN, as the client has full control over the streaming session and has the possibility to adapt the multimedia stream to its context (e.g. network conditions, device capabilities), which is compatible with the paradigms adopted by ICN. In this article we investigate the implementation of adaptive multimedia streaming within networks adopting the ICN approach. In particular, we present our approach based on the recently ratified ISO/IEC MPEG standard Dynamic Adaptive Streaming over HTTP and the ICN representative Content-Centric Networking, including baseline evaluations and open research challenges.",
"title": ""
},
{
"docid": "neg:1840181_5",
"text": "Neuromyelitis optica (NMO) is an inflammatory CNS syndrome distinct from multiple sclerosis (MS) that is associated with serum aquaporin-4 immunoglobulin G antibodies (AQP4-IgG). Prior NMO diagnostic criteria required optic nerve and spinal cord involvement but more restricted or more extensive CNS involvement may occur. The International Panel for NMO Diagnosis (IPND) was convened to develop revised diagnostic criteria using systematic literature reviews and electronic surveys to facilitate consensus. The new nomenclature defines the unifying term NMO spectrum disorders (NMOSD), which is stratified further by serologic testing (NMOSD with or without AQP4-IgG). The core clinical characteristics required for patients with NMOSD with AQP4-IgG include clinical syndromes or MRI findings related to optic nerve, spinal cord, area postrema, other brainstem, diencephalic, or cerebral presentations. More stringent clinical criteria, with additional neuroimaging findings, are required for diagnosis of NMOSD without AQP4IgG or when serologic testing is unavailable. The IPND also proposed validation strategies and achieved consensus on pediatric NMOSD diagnosis and the concepts of monophasic NMOSD and opticospinal MS. Neurology® 2015;85:1–13 GLOSSARY ADEM 5 acute disseminated encephalomyelitis; AQP4 5 aquaporin-4; IgG 5 immunoglobulin G; IPND 5 International Panel for NMO Diagnosis; LETM 5 longitudinally extensive transverse myelitis lesions; MOG 5 myelin oligodendrocyte glycoprotein; MS 5 multiple sclerosis; NMO 5 neuromyelitis optica; NMOSD 5 neuromyelitis optica spectrum disorders; SLE 5 systemic lupus erythematosus; SS 5 Sjögren syndrome. Neuromyelitis optica (NMO) is an inflammatory CNS disorder distinct from multiple sclerosis (MS). It became known as Devic disease following a seminal 1894 report. 
Traditionally, NMO was considered a monophasic disorder consisting of simultaneous bilateral optic neuritis and transverse myelitis but relapsing cases were described in the 20th century. MRI revealed normal brain scans and ≥3 vertebral segment longitudinally extensive transverse myelitis lesions (LETM) in NMO. The nosology of NMO, especially whether it represented a topographically restricted form of MS, remained controversial. A major advance was the discovery that most patients with NMO have detectable serum antibodies that target the water channel aquaporin-4 (AQP4–immunoglobulin G [IgG]), are highly specific for clinically diagnosed NMO, and have pathogenic potential. In 2006, AQP4-IgG serology was incorporated into revised NMO diagnostic criteria that relaxed clinical From the Departments of Neurology (D.M.W.) and Library Services (K.E.W.), Mayo Clinic, Scottsdale, AZ; the Children’s Hospital of Philadelphia (B.B.), PA; the Departments of Neurology and Ophthalmology (J.L.B.), University of Colorado Denver, Aurora; the Service de Neurologie (P.C.), Centre Hospitalier Universitaire de Fort de France, Fort-de-France, Martinique; Department of Neurology (W.C.), Sir Charles Gairdner Hospital, Perth, Australia; the Department of Neurology (T.C.), Massachusetts General Hospital, Boston; the Department of Neurology (J.d.S.), Strasbourg University, France; the Department of Multiple Sclerosis Therapeutics (K.F.), Tohoku University Graduate School of Medicine, Sendai, Japan; the Departments of Neurology and Neurotherapeutics (B.G.), University of Texas Southwestern Medical Center, Dallas; The Walton Centre NHS Trust (A.J.), Liverpool, UK; the Molecular Neuroimmunology Group, Department of Neurology (S.J.), University Hospital Heidelberg, Germany; the Center for Multiple Sclerosis Investigation (M.L.-P.), Federal University of Minas Gerais Medical School, Belo Horizonte, Brazil; the Department of Neurology (M.L.), Johns Hopkins University, Baltimore, MD; Portland VA 
Medical Center and Oregon Health and Sciences University (J.H.S.), Portland; the Department of Neurology (S.T.), National Pediatric Hospital Dr. Juan P. Garrahan, Buenos Aires, Argentina; the Department of Medicine (A.L.T.), University of British Columbia, Vancouver, Canada; Nuffield Department of Clinical Neurosciences (P.W.), University of Oxford, UK; and the Department of Neurology (B.G.W.), Mayo Clinic, Rochester, MN. Go to Neurology.org for full disclosures. Funding information and disclosures deemed relevant by the authors, if any, are provided at the end of the article. The Article Processing Charge was paid by the Guthy-Jackson Charitable Foundation. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), which permits downloading and sharing the work provided it is properly cited. The work cannot be changed in any way or used commercially. © 2015 American Academy of Neurology. Unauthorized reproduction of this article is prohibited. Published Ahead of Print on June 19, 2015 as 10.1212/WNL.0000000000001729",
"title": ""
},
{
"docid": "neg:1840181_6",
"text": "In today’s world, there is a continuous global need for more energy which, at the same time, has to be cleaner than the energy produced from the traditional generation technologies. This need has facilitated the increasing penetration of distributed generation (DG) technologies and primarily of renewable energy sources (RES). The extensive use of such energy sources in today’s electricity networks can indisputably minimize the threat of global warming and climate change. However, the power output of these energy sources is not as reliable and as easy to adjust to changing demand cycles as the output from the traditional power sources. This disadvantage can only be effectively overcome by the storing of the excess power produced by DG-RES. Therefore, in order for these new sources to become completely reliable as primary sources of energy, energy storage is a crucial factor. In this work, an overview of the current and future energy storage technologies used for electric power applications is carried out. Most of the technologies are in use today while others are still under intensive research and development. A comparison between the various technologies is presented in terms of the most important technological characteristics of each technology. The comparison shows that each storage technology is different in terms of its ideal network application environment and energy storage scale. This means that in order to achieve optimum results, the unique network environment and the specifications of the storage device have to be studied thoroughly, before a decision for the ideal storage technology to be selected is taken. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840181_7",
"text": "Many people have a strong intuition that there is something morallyobjectionable about playing violent video games, particularly withincreases in the number of people who are playing them and the games'alleged contribution to some highly publicized crimes. In this paper,I use the framework of utilitarian, deontological, and virtue ethicaltheories to analyze the possibility that there might be some philosophicalfoundation for these intuitions. I raise the broader question of whetheror not participating in authentic simulations of immoral acts in generalis wrong. I argue that neither the utilitarian, nor the Kantian hassubstantial objections to violent game playing, although they offersome important insights into playing games in general and what it ismorally to be a ``good sport.'' The Aristotelian, however, has a plausibleand intuitive way to protest participation in authentic simulations ofviolent acts in terms of character: engaging in simulated immoral actserodes one's character and makes it more difficult for one to live afulfilled eudaimonic life.",
"title": ""
},
{
"docid": "neg:1840181_8",
"text": "Telecommunication sector generates a huge amount of data due to increasing number of subscribers, rapidly renewable technologies; data based applications and other value added service. This data can be usefully mined for churn analysis and prediction. Significant research had been undertaken by researchers worldwide to understand the data mining practices that can be used for predicting customer churn. This paper provides a review of around 100 recent journal articles starting from year 2000 to present the various data mining techniques used in multiple customer based churn models. It then summarizes the existing telecom literature by highlighting the sample size used, churn variables employed and the findings of different DM techniques. Finally, we list the most popular techniques for churn prediction in telecom as decision trees, regression analysis and clustering, thereby providing a roadmap to new researchers to build upon novel churn management models.",
"title": ""
},
{
"docid": "neg:1840181_9",
"text": "Frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.",
"title": ""
},
{
"docid": "neg:1840181_10",
"text": "Due to a wide range of applications, wireless sensor networks (WSNs) have recently attracted a lot of interest to the researchers. Limited computational capacity and power usage are two major challenges to ensure security in WSNs. Recently, more secure communication or data aggregation techniques have discovered. So, familiarity with the current research in WSN security will benefit researchers greatly. In this paper, security related issues and challenges in WSNs are investigated. We identify the security threats and review proposed security mechanisms for WSNs. Moreover, we provide a brief discussion on the future research direction in WSN security.",
"title": ""
},
{
"docid": "neg:1840181_11",
"text": "In acoustic modeling for large vocabulary continuous speech recognition, it is essential to model long term dependency within speech signals. Usually, recurrent neural network (RNN) architectures, especially the long short term memory (LSTM) models, are the most popular choice. Recently, a novel architecture, namely feedforward sequential memory networks (FSMN), provides a non-recurrent architecture to model long term dependency in sequential data and has achieved better performance over RNNs on acoustic modeling and language modeling tasks. In this work, we propose a compact feedforward sequential memory networks (cFSMN) by combining FSMN with low-rank matrix factorization. We also make a slight modification to the encoding method used in FSMNs in order to further simplify the network architecture. On the Switchboard task, the proposed new cFSMN structures can reduce the model size by 60% and speed up the learning by more than 7 times while the models still significantly outperform the popular bidirection LSTMs for both frame-level cross-entropy (CE) criterion based training and MMI based sequence training.",
"title": ""
},
{
"docid": "neg:1840181_12",
"text": "We present a novel end-to-end trainable neural network model for task-oriented dialog systems. The model is able to track dialog state, issue API calls to knowledge base (KB), and incorporate structured KB query results into system responses to successfully complete task-oriented dialogs. The proposed model produces well-structured system responses by jointly learning belief tracking and KB result processing conditioning on the dialog history. We evaluate the model in a restaurant search domain using a dataset that is converted from the second Dialog State Tracking Challenge (DSTC2) corpus. Experiment results show that the proposed model can robustly track dialog state given the dialog history. Moreover, our model demonstrates promising results in producing appropriate system responses, outperforming prior end-to-end trainable neural network models using per-response accuracy evaluation metrics.",
"title": ""
},
{
"docid": "neg:1840181_13",
"text": "the od. cted ly genof 997 Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "neg:1840181_14",
"text": "In the past few years, mobile augmented reality (AR) has attracted a great deal of attention. It presents us a live, direct or indirect view of a real-world environment whose elements are augmented (or supplemented) by computer-generated sensory inputs such as sound, video, graphics or GPS data. Also, deep learning has the potential to improve the performance of current AR systems. In this paper, we propose a distributed mobile logo detection framework. Our system consists of mobile AR devices and a back-end server. Mobile AR devices can capture real-time videos and locally decide which frame should be sent to the back-end server for logo detection. The server schedules all detection jobs to minimise the maximum latency. We implement our system on the Google Nexus 5 and a desktop with a wireless network interface. Evaluation results show that our system can detect the view change activity with an accuracy of 95:7% and successfully process 40 image processing jobs before deadline. ARTICLE HISTORY Received 6 June 2018 Accepted 30 June 2018",
"title": ""
},
{
"docid": "neg:1840181_15",
"text": "The CyberDesk project is aimed at providing a software architecture that dynamically integrates software modules. This integration is driven by a user’s context, where context includes the user’s physical, social, emotional, and mental (focus-of-attention) environments. While a user’s context changes in all settings, it tends to change most frequently in a mobile setting. We have used the CyberDesk ystem in a desktop setting and are currently using it to build an intelligent home nvironment.",
"title": ""
},
{
"docid": "neg:1840181_16",
"text": "Transportation research relies heavily on a variety of data. From sensors to surveys, data supports dayto-day operations as well as long-term planning and decision-making. The challenges that arise due to the volume and variety of data that are found in transportation research can be effectively addressed by ontologies. This opportunity has already been recognized – there are a number of existing transportation ontologies, however the relationship between them is unclear. The goal of this work is to provide an overview of the opportunities for ontologies in transportation research and operation, and to present a survey of existing transportation ontologies to serve two purposes: (1) to provide a resource for the transportation research community to aid in understanding (and potentially selecting between) existing transportation ontologies; and (2) to identify future work for the development of transportation ontologies, by identifying areas that may be lacking.",
"title": ""
},
{
"docid": "neg:1840181_17",
"text": "Study of the forecasting models using large scale microblog discussions and the search behavior data can provide a good insight for better understanding the market movements. In this work we collected a dataset of 2 million tweets and search volume index (SVI from Google) for a period of June 2010 to September 2011. We model a set of comprehensive causative relationships over this dataset for various market securities like equity (Dow Jones Industrial Average-DJIA and NASDAQ-100), commodity markets (oil and gold) and Euro Forex rates. We also investigate the lagged and statistically causative relations of Twitter sentiments developed during active trading days and market inactive days in combination with the search behavior of public before any change in the prices/indices. Our results show extent of lagged significance with high correlation value upto 0.82 between search volumes and gold price in USD. We find weekly accuracy in direction (up and down prediction) uptil 94.3% for DJIA and 90% for NASDAQ-100 with significant reduction in mean average percentage error for all the forecasting models.",
"title": ""
},
{
"docid": "neg:1840181_18",
"text": "In recent years, the improvement of wireless protocols, the development of cloud services and the lower cost of hardware have started a new era for smart homes. One such enabling technologies is fog computing, which extends cloud computing to the edge of a network allowing for developing novel Internet of Things (IoT) applications and services. Under the IoT fog computing paradigm, IoT gateways are usually utilized to exchange messages with IoT nodes and a cloud. WiFi and ZigBee stand out as preferred communication technologies for smart homes. WiFi has become very popular, but it has a limited application due to its high energy consumption and the lack of standard mesh networking capabilities for low-power devices. For such reasons, ZigBee was selected by many manufacturers for developing wireless home automation devices. As a consequence, these technologies may coexist in the 2.4 GHz band, which leads to collisions, lower speed rates and increased communications latencies. This article presents ZiWi, a distributed fog computing Home Automation System (HAS) that allows for carrying out seamless communications among ZigBee and WiFi devices. This approach diverges from traditional home automation systems, which often rely on expensive central controllers. In addition, to ease the platform's building process, whenever possible, the system makes use of open-source software (all the code of the nodes is available on GitHub) and Commercial Off-The-Shelf (COTS) hardware. The initial results, which were obtained in a number of representative home scenarios, show that the developed fog services respond several times faster than the evaluated cloud services, and that cross-interference has to be taken seriously to prevent collisions. In addition, the current consumption of ZiWi's nodes was measured, showing the impact of encryption mechanisms.",
"title": ""
},
{
"docid": "neg:1840181_19",
"text": "The adoption of Course Management Systems (CMSs) for web-based instruction continues to increase in today’s higher education. A CMS is a software program or integrated platform that contains a series of web-based tools to support a number of activities and course management procedures (Severson, 2004). Examples of Course Management Systems are Blackboard, WebCT, eCollege, Moodle, Desire2Learn, Angel, etc. An argument for the adoption of e-learning environments using CMSs is the flexibility of such environments when reaching out to potential learners in remote areas where brick and mortar institutions are non-existent. It is also believed that e-learning environments can have potential added learning benefits and can improve students’ and educators’ self-regulation skills, in particular their metacognitive skills. In spite of this potential to improve learning by means of using a CMS for the delivery of e-learning, the features and functionalities that have been built into these systems are often underutilized. As a consequence, the created learning environments in CMSs do not adequately scaffold learners to improve their self-regulation skills. In order to support the improvement of both the learners’ subject matter knowledge and learning strategy application, the e-learning environments within CMSs should be designed to address learners’ diversity in terms of learning styles, prior knowledge, culture, and self-regulation skills. Self-regulative learners are learners who can demonstrate ‘personal initiative, perseverance and adaptive skill in pursuing learning’ (Zimmerman, 2002). Self-regulation requires adequate monitoring strategies and metacognitive skills. The created e-learning environments should encourage the application of learners’ metacognitive skills by prompting learners to plan, attend to relevant content, and monitor and evaluate their learning.
This position paper sets out to inform policy makers, educators, researchers, and others of the importance of a metacognitive e-learning approach when designing instruction using Course Management Systems. Such a metacognitive approach will improve the utilization of CMSs to support learners on their path to self-regulation. We argue that a powerful CMS incorporates features and functionalities that can provide extensive scaffolding to learners and support them in becoming self-regulated learners. Finally, we believe that extensive training and support is essential if educators are expected to develop and implement CMSs as powerful learning tools.",
"title": ""
}
] |
1840182 | OPTIMIZATION OF A WAVE CANCELLATION MULTIHULL SHIP USING CFD TOOLS | [
{
"docid": "pos:1840182_0",
"text": "Four methods of analysis – a nonlinear method based on Euler's equations and three linear potential flow methods – are used to determine the optimal location of the outer hulls for a wave cancellation multihull ship that consists of a main center hull and two outer hulls. The three potential flow methods correspond to a hierarchy of simple approximations based on the Fourier-Kochin representation of ship waves and the slender-ship approximation.",
"title": ""
}
] | [
{
"docid": "neg:1840182_0",
"text": "Irony is an important device in human communication, both in everyday spoken conversations as well as in written texts including books, websites, chats, reviews, and Twitter messages among others. Specific cases of irony and sarcasm have been studied in different contexts but, to the best of our knowledge, only recently the first publicly available corpus including annotations about whether a text is ironic or not has been published by Filatova (2012). However, no baseline for classification of ironic or sarcastic reviews has been provided. With this paper, we aim at closing this gap. We formulate the problem as a supervised classification task and evaluate different classifiers, reaching an F1-measure of up to 74 % using logistic regression. We analyze the impact of a number of features which have been proposed in previous research as well as combinations of them.",
"title": ""
},
{
"docid": "neg:1840182_1",
"text": "Depth acquisition becomes inexpensive after the revolutionary invention of Kinect. For computer vision applications, depth maps captured by Kinect require additional processing to fill up missing parts. However, conventional inpainting methods for color images cannot be applied directly to depth maps as there are not enough cues to make accurate inference about scene structures. In this paper, we propose a novel fusion based inpainting method to improve depth maps. The proposed fusion strategy integrates conventional inpainting with the recently developed non-local filtering scheme. The good balance between depth and color information guarantees an accurate inpainting result. Experimental results show the mean absolute error of the proposed method is about 20 mm, which is comparable to the precision of the Kinect sensor.",
"title": ""
},
{
"docid": "neg:1840182_2",
"text": "Supervised learning using deep convolutional neural network has shown its promise in large-scale image classification task. As a building block, it is now well positioned to be part of a larger system that tackles real-life multimedia tasks. An unresolved issue is that such model is trained on a static snapshot of data. Instead, this paper positions the training as a continuous learning process as new classes of data arrive. A system with such capability is useful in practical scenarios, as it gradually expands its capacity to predict increasing number of new classes. It is also our attempt to address the more fundamental issue: a good learning system must deal with new knowledge that it is exposed to, much as how human do.\n We developed a training algorithm that grows a network not only incrementally but also hierarchically. Classes are grouped according to similarities, and self-organized into levels. The newly added capacities are divided into component models that predict coarse-grained superclasses and those return final prediction within a superclass. Importantly, all models are cloned from existing ones and can be trained in parallel. These models inherit features from existing ones and thus further speed up the learning. Our experiment points out advantages of this approach, and also yields a few important open questions.",
"title": ""
},
{
"docid": "neg:1840182_3",
"text": "Linear support vector machines (svms) have become popular for solving classification tasks due to their fast and simple online application to large scale data sets. However, many problems are not linearly separable. For these problems kernel-based svms are often used, but unlike their linear variant they suffer from various drawbacks in terms of computational and memory efficiency. Their response can be represented only as a function of the set of support vectors, which has been experimentally shown to grow linearly with the size of the training set. In this paper we propose a novel locally linear svm classifier with smooth decision boundary and bounded curvature. We show how the functions defining the classifier can be approximated using local codings and show how this model can be optimized in an online fashion by performing stochastic gradient descent with the same convergence guarantees as standard gradient descent method for linear svm. Our method achieves comparable performance to the state-of-the-art whilst being significantly faster than competing kernel svms. We generalise this model to locally finite dimensional kernel svm.",
"title": ""
},
{
"docid": "neg:1840182_4",
"text": "Numerous studies have shown that datacenter computers rarely operate at full utilization, leading to a number of proposals for creating servers that are energy proportional with respect to the computation that they are performing.\n In this paper, we show that as servers themselves become more energy proportional, the datacenter network can become a significant fraction (up to 50%) of cluster power. In this paper we propose several ways to design a high-performance datacenter network whose power consumption is more proportional to the amount of traffic it is moving -- that is, we propose energy proportional datacenter networks.\n We first show that a flattened butterfly topology itself is inherently more power efficient than the other commonly proposed topology for high-performance datacenter networks. We then exploit the characteristics of modern plesiochronous links to adjust their power and performance envelopes dynamically. Using a network simulator, driven by both synthetic workloads and production datacenter traces, we characterize and understand design tradeoffs, and demonstrate an 85% reduction in power --- which approaches the ideal energy-proportionality of the network.\n Our results also demonstrate two challenges for the designers of future network switches: 1) We show that there is a significant power advantage to having independent control of each unidirectional channel comprising a network link, since many traffic patterns show very asymmetric use, and 2) system designers should work to optimize the high-speed channel designs to be more energy efficient by choosing optimal data rate and equalization technology. Given these assumptions, we demonstrate that energy proportional datacenter communication is indeed possible.",
"title": ""
},
{
"docid": "neg:1840182_5",
"text": "The design and implementation of a novel frequency synthesizer based on low phase-noise digital dividers and a direct digital synthesizer is presented. The synthesis produces two low noise accurate tunable signals at 10 and 100 MHz. We report the measured residual phase noise and frequency stability of the synthesizer and estimate the total frequency stability, which can be expected from the synthesizer seeded with a signal near 11.2 GHz from an ultra-stable cryocooled sapphire oscillator (cryoCSO). The synthesizer residual single-sideband phase noise, at 1-Hz offset, on 10- and 100-MHz signals was -135 and -130 dBc/Hz, respectively. The frequency stability contributions of these two signals were σ<sub>y</sub> = 9 × 10<sup>-15</sup> and σ<sub>y</sub> = 2.2 × 10<sup>-15</sup>, respectively, at 1-s integration time. The Allan deviation of the total fractional frequency noise on the 10- and 100-MHz signals derived from the synthesizer with the cryoCSO may be estimated as σ<sub>y</sub> ≈ 3.6 × 10<sup>-15</sup> τ<sup>-1/2</sup> + 4 × 10<sup>-16</sup> and σ<sub>y</sub> ≈ 5.2 × 10<sup>-16</sup> τ<sup>-1/2</sup> + 3 × 10<sup>-16</sup>, respectively, for 1 ≤ τ < 10<sup>4</sup> s. We also calculate the coherence function (a figure of merit for very long baseline interferometry in radio astronomy) for observation frequencies of 100, 230, and 345 GHz, when using the cryoCSO and a hydrogen maser. The results show that the cryoCSO offers a significant advantage at frequencies above 100 GHz.",
"title": ""
},
{
"docid": "neg:1840182_6",
"text": "Textbook Question Answering (TQA) [1] is a newly proposed task to answer arbitrary questions in middle school curricula, which has particular challenges to understand the long essays in additional to the images. Bilinear models [2], [3] are effective at learning high-level associations between questions and images, but are inefficient to handle the long essays. In this paper, we propose an Essay-anchor Attentive Multi-modal Bilinear pooling (EAMB), a novel method to encode the long essays into the joint space of the questions and images. The essay-anchors, embedded from the keywords, represent the essay information in a latent space. We propose a novel network architecture to pay special attention on the keywords in the questions, consequently encoding the essay information into the question features, and thus the joint space with the images. We then use the bilinear models to extract the multi-modal interactions to obtain the answers. EAMB successfully utilizes the redundancy of the pre-trained word embedding space to represent the essay-anchors. This avoids the extra learning difficulties from exploiting large network structures. Quantitative and qualitative experiments show the outperforming effects of EAMB on the TQA dataset.",
"title": ""
},
{
"docid": "neg:1840182_7",
"text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization and with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems). KeywordsSwarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.",
"title": ""
},
{
"docid": "neg:1840182_8",
"text": "Wheel-spinning refers to a phenomenon in which a student has spent a considerable amount of time practicing a skill, yet displays little or no progress towards mastery. Wheel-spinning has been shown to be a common problem affecting a significant number of students in different tutoring systems and is negatively associated with learning. In this study, we construct a model of wheel-spinning, using generic features easily calculated from most tutoring systems. We show that for two different systems' data, the model generalizes to future students very well and can detect wheel-spinning in an early stage with high accuracy. We also refine the scope of the wheel-spinning problem in two systems using the model's predictions.",
"title": ""
},
{
"docid": "neg:1840182_9",
"text": "This essay addresses the question of how participatory design (PD) researchers and practitioners can pursue commitments to social justice and democracy while retaining commitments to reflective practice, the voices of the marginal, and design experiments “in the small.” I argue that contemporary feminist utopianism has, on its own terms, confronted similar issues, and I observe that it and PD pursue similar agendas, but with complementary strengths. I thus propose a cooperative engagement between feminist utopianism and PD at the levels of theory, methodology, and on-the-ground practice. I offer an analysis of a case—an urban renewal project in Taipei, Taiwan—as a means of exploring what such a cooperative engagement might entail. I argue that feminist utopianism and PD have complementary strengths that could be united to develop and to propose alternative futures that reflect democratic values and procedures, emerging technologies and infrastructures as design materials, a commitment to marginalized voices (and the bodies that speak them), and an ambitious, even literary, imagination.",
"title": ""
},
{
"docid": "neg:1840182_10",
"text": "Single exon genes (SEG) are archetypical of prokaryotes. Hence, their presence in intron-rich, multi-cellular eukaryotic genomes is perplexing. Consequently, a study on SEG origin and evolution is important. Towards this goal, we took the first initiative of identifying and counting SEG in nine completely sequenced eukaryotic organisms--four of which are unicellular (E. cuniculi, S. cerevisiae, S. pombe, P. falciparum) and five of which are multi-cellular (C. elegans, A. thaliana, D. melanogaster, M. musculus, H. sapiens). This exercise enabled us to compare their proportion in unicellular and multi-cellular genomes. The comparison suggests that the SEG fraction decreases with gene count (r = -0.80) and increases with gene density (r = 0.88) in these genomes. We also examined the distribution patterns of their protein lengths in different genomes.",
"title": ""
},
{
"docid": "neg:1840182_11",
"text": "We have developed a new mutual information-based registration method for matching unlabeled point features. In contrast to earlier mutual information-based registration methods, which estimate the mutual information using image intensity information, our approach uses the point feature location information. A novel aspect of our approach is the emergence of correspondence (between the two sets of features) as a natural by-product of joint density estimation. We have applied this algorithm to the problem of geometric alignment of primate autoradiographs. We also present preliminary results on three-dimensional robust matching of sulci derived from anatomical magnetic resonance images. Finally, we present an experimental comparison between the mutual information approach and other recent approaches which explicitly parameterize feature correspondence.",
"title": ""
},
{
"docid": "neg:1840182_12",
"text": "Relational databases are queried using database query languages such as SQL. Natural language interfaces to databases (NLIDB) are systems that translate a natural language sentence into a database query. In this modern techno-crazy world, as more and more laymen access various systems and applications through their smart phones and tablets, the need for Natural Language Interfaces (NLIs) has increased manifold. The challenges in Natural language Query processing are interpreting the sentence correctly, removal of various ambiguity and mapping to the appropriate context. Natural language access problem is actually composed of two stages Linguistic processing and Database processing. NLIDB techniques encompass a wide variety of approaches. The approaches include traditional methods such as Pattern Matching, Syntactic Parsing and Semantic Grammar to modern systems such as Intermediate Query Generation, Machine Learning and Ontologies. In this report, various approaches to build NLIDB systems have been analyzed and compared along with their advantages, disadvantages and application areas. Also, a natural language interface to a flight reservation system has been implemented comprising of flight and booking inquiry systems.",
"title": ""
},
{
"docid": "neg:1840182_13",
"text": "We present Jabberwocky, a social computing stack that consists of three components: a human and machine resource management system called Dormouse, a parallel programming framework for human and machine computation called ManReduce, and a high-level programming language on top of ManReduce called Dog. Dormouse is designed to enable cross-platform programming languages for social computation, so, for example, programs written for Mechanical Turk can also run on other crowdsourcing platforms. Dormouse also enables a programmer to easily combine crowdsourcing platforms or create new ones. Further, machines and people are both first-class citizens in Dormouse, allowing for natural parallelization and control flows for a broad range of data-intensive applications. And finally and importantly, Dormouse includes notions of real identity, heterogeneity, and social structure. We show that the unique properties of Dormouse enable elegant programming models for complex and useful problems, and we propose two such frameworks. ManReduce is a framework for combining human and machine computation into an intuitive parallel data flow that goes beyond existing frameworks in several important ways, such as enabling functions on arbitrary communication graphs between human and machine clusters. And Dog is a high-level procedural language written on top of ManReduce that focuses on expressivity and reuse. We explore two applications written in Dog: bootstrapping product recommendations without purchase data, and expert labeling of medical images.",
"title": ""
},
{
"docid": "neg:1840182_14",
"text": "The emergence and spread of antibiotic resistance among pathogenic bacteria has been a rising problem for public health in recent decades. It is becoming increasingly recognized that not only antibiotic resistance genes (ARGs) encountered in clinical pathogens are of relevance, but rather, all pathogenic, commensal as well as environmental bacteria-and also mobile genetic elements and bacteriophages-form a reservoir of ARGs (the resistome) from which pathogenic bacteria can acquire resistance via horizontal gene transfer (HGT). HGT has caused antibiotic resistance to spread from commensal and environmental species to pathogenic ones, as has been shown for some clinically important ARGs. Of the three canonical mechanisms of HGT, conjugation is thought to have the greatest influence on the dissemination of ARGs. While transformation and transduction are deemed less important, recent discoveries suggest their role may be larger than previously thought. Understanding the extent of the resistome and how its mobilization to pathogenic bacteria takes place is essential for efforts to control the dissemination of these genes. Here, we will discuss the concept of the resistome, provide examples of HGT of clinically relevant ARGs and present an overview of the current knowledge of the contributions the various HGT mechanisms make to the spread of antibiotic resistance.",
"title": ""
},
{
"docid": "neg:1840182_15",
"text": "The objective of the present study was to investigate whether transpedicular bone grafting as a supplement to posterior pedicle screw fixation in thoracolumbar fractures results in a stable reconstruction of the anterior column, that allows healing of the fracture without loss of correction. Posterior instrumentation using an internal fixator is a standard procedure for stabilizing the injured thoracolumbar spine. Transpedicular bone grafting was first described by Daniaux in 1986 to achieve intrabody fusion. Pedicle screw fixation with additional transpedicular fusion has remained controversial because of inconsistent reports. A retrospective single surgeon cohort study was performed. Between October 2001 and May 2007, 30 consecutive patients with 31 acute traumatic burst fractures of the thoracolumbar spine (D12-L5) were treated operatively. The mean age of the patients was 45.7 years (range: 19-78). There were 23 men and 7 women. Nineteen thoracolumbar fractures were sustained in falls from a height; the other fractures were the result of motor vehicle accidents. The vertebrae most often involved were L1 in 13 patients and L2 in 8 patients. According to the Magerl classification, 25 patients sustained Type A1, 4 Type A2 and 2 Type A3 fractures. The mean time from injury to surgery was 6 days (range 2-14 days). Two postoperative complications were observed: one superficial and one deep infection. Mean Cobb's angle improved from +7.16 degrees (SD 12.44) preoperatively to -5.48 degrees (SD 11.44) immediately after operation, with a mean loss of correction of 1.00 degrees (SD 3.04) at two years. Reconstruction of the anterior column is important to prevent loss of correction. In our experience, the use of transpedicular bone grafting has efficiently restored the anterior column and has preserved the post-operative correction of kyphosis until healing of the fracture.",
"title": ""
},
{
"docid": "neg:1840182_16",
"text": "Recurrent neural networks are a widely used class of neural architectures. They have, however, two shortcomings. First, it is difficult to understand what exactly they learn. Second, they tend to work poorly on sequences requiring long-term memorization, despite having this capacity in principle. We aim to address both shortcomings with a class of recurrent networks that use a stochastic state transition mechanism between cell applications. This mechanism, which we term state-regularization, makes RNNs transition between a finite set of learnable states. We evaluate state-regularized RNNs on (1) regular languages for the purpose of automata extraction; (2) nonregular languages such as balanced parentheses, palindromes, and the copy task where external memory is required; and (3) real-world sequence learning tasks for sentiment analysis, visual object recognition, and language modeling. We show that state-regularization (a) simplifies the extraction of finite state automata modeling an RNN’s state transition dynamics; (b) forces RNNs to operate more like automata with external memory and less like finite state machines; (c) makes RNNs have better interpretability and explainability.",
"title": ""
},
{
"docid": "neg:1840182_17",
"text": "Carnegie Mellon University has proposed an educationaland entertainment-based robotic lunar mission which will last two years and cover 1000km on the moon and revisit several historic sites. With the transmission of live panoramic video, participants will be provided the opportunity for interactively exploring the moon through teleoperation and telepresence. The requirement of panoramic video and telepresence demands high data rates on the order of 7.5 Mbps. This is challenging since the power available for communication is approximately 100W and occupied bandwidth is limited to less than 10 MHz. The tough environment on the moon introduces additional challenges of survivability and reliability. A communication system based on a phased array antenna, Nyquist QPSK modulation and a rate 2/3 Turbo code is presented which can satisfy requirements of continuous high data rate communication at low power and bandwidth reliably over a two year mission duration. Three ground stations with 22m parabolic antennas are required around the world to maintain continuous communication. The transmission will then be relayed via satellite to the current control station location. This paper presents an overview of the mission, and communication requirements and design.",
"title": ""
},
{
"docid": "neg:1840182_18",
"text": "Complex event processing has become increasingly important in modern applications, ranging from supply chain management for RFID tracking to real-time intrusion detection. The goal is to extract patterns from such event streams in order to make informed decisions in real-time. However, networking latencies and even machine failure may cause events to arrive out-of-order at the event stream processing engine. In this work, we address the problem of processing event pattern queries specified over event streams that may contain out-of-order data. First, we analyze the problems state-of-the-art event stream processing technology would experience when faced with out-of-order data arrival. We then propose a new solution of physical implementation strategies for the core stream algebra operators such as sequence scan and pattern construction, including stack- based data structures and associated purge algorithms. Optimizations for sequence scan and construction as well as state purging to minimize CPU cost and memory consumption are also introduced. Lastly, we conduct an experimental study demonstrating the effectiveness of our approach.",
"title": ""
}
] |
1840183 | Analysis of Permission-based Security in Android through Policy Expert, Developer, and End User Perspectives | [
{
"docid": "pos:1840183_0",
"text": "Each time a user installs an application on their Android phone they are presented with a full screen of information describing what access they will be granting that application. This information is intended to help them make two choices: whether or not they trust that the application will not damage the security of their device and whether or not they are willing to share their information with the application, developer, and partners in question. We performed a series of semi-structured interviews in two cities to determine whether people read and understand these permissions screens, and to better understand how people perceive the implications of these decisions. We find that the permissions displays are generally viewed and read, but not understood by Android users. Alarmingly, we find that people are unaware of the security risks associated with mobile apps and believe that app marketplaces test and reject applications. In sum, users are not currently well prepared to make informed privacy and security decisions around installing applications.",
"title": ""
}
] | [
{
"docid": "neg:1840183_0",
"text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.",
"title": ""
},
{
"docid": "neg:1840183_1",
"text": "Human visual system (HVS) can perceive constant color under varying illumination conditions while digital images record information of both reflectance (physical color) of objects and illumination. Retinex theory, formulated by Edwin H. Land, aimed to simulate and explain this feature of HVS. However, to recover the reflectance from a given image is in general an ill-posed problem. In this paper, we establish an L1-based variational model for Retinex theory that can be solved by a fast computational approach based on Bregman iteration. Compared with previous works, our L1-Retinex method is more accurate for recovering the reflectance, which is illustrated by examples and statistics. In medical images such as magnetic resonance imaging (MRI), intensity inhomogeneity is often encountered due to bias fields. This is a similar formulation to Retinex theory while the MRI has some specific properties. We then modify the L1-Retinex method and develop a new algorithm for MRI data. We demonstrate the performance of our method by comparison with previous work on simulated and real data.",
"title": ""
},
{
"docid": "neg:1840183_2",
"text": "The study examined the impact of advertising on building brand equity in Zimbabwe’s Tobacco Auction floors. In this study, 100 farmers were selected from 88 244 farmers registered in the four tobacco growing regions of country. A structured questionnaire was used as a tool to collect primary data. A pilot survey with 20 participants was initially conducted to test the reliability of the questionnaire. Results of the pilot study were analysed to test for reliability using SPSS.Results of the study found that advertising affects brand awareness, brand loyalty, brand association and perceived quality. 55% of the respondents agreed that advertising changed their perceived quality on auction floors. A linear regression analysis was performed to predict brand quality as a function of the type of farmer, source of information, competitive average pricing, loyalty, input assistance, service delivery, number of floors, advert mode, customer service, floor reputation and attitude. There was a strong relationship between brand quality and the independent variables as depicted by the regression coefficient of 0.885 and the model fit is perfect at 78.3%. From the ANOVA tables, a good fit was established between advertising and brand equity with p=0.001 which is less than the significance level of 0.05. While previous researches concentrated on the elements of brand equity as suggested by Keller’s brand equity model, this research has managed to extend the body of knowledge on brand equity by exploring the role of advertising. Future research should assess the relationship between advertising and a brand association.",
"title": ""
},
{
"docid": "neg:1840183_3",
"text": "In this paper, we present a new and significant theoretical discovery. If the absolute height difference between base station (BS) antenna and user equipment (UE) antenna is larger than zero, then the network capacity performance in terms of the area spectral efficiency (ASE) will continuously decrease as the BS density increases for ultra-dense (UD) small cell networks (SCNs). This performance behavior has a tremendous impact on the deployment of UD SCNs in the 5th- generation (5G) era. Network operators may invest large amounts of money in deploying more network infrastructure to only obtain an even worse network performance. Our study results reveal that it is a must to lower the SCN BS antenna height to the UE antenna height to fully achieve the capacity gains of UD SCNs in 5G. However, this requires a revolutionized approach of BS architecture and deployment, which is explored in this paper too.",
"title": ""
},
{
"docid": "neg:1840183_4",
"text": "This paper presents a new approach for learning in structured domains (SDs) using a constructive neural network for graphs (NN4G). The new model allows the extension of the input domain for supervised neural networks to a general class of graphs including both acyclic/cyclic, directed/undirected labeled graphs. In particular, the model can realize adaptive contextual transductions, learning the mapping from graphs for both classification and regression tasks. In contrast to previous neural networks for structures that had a recursive dynamics, NN4G is based on a constructive feedforward architecture with state variables that uses neurons with no feedback connections. The neurons are applied to the input graphs by a general traversal process that relaxes the constraints of previous approaches derived by the causality assumption over hierarchical input data. Moreover, the incremental approach eliminates the need to introduce cyclic dependencies in the definition of the system state variables. In the traversal process, the NN4G units exploit (local) contextual information of the graphs vertices. In spite of the simplicity of the approach, we show that, through the compositionality of the contextual information developed by the learning, the model can deal with contextual information that is incrementally extended according to the graphs topology. The effectiveness and the generality of the new approach are investigated by analyzing its theoretical properties and providing experimental results.",
"title": ""
},
{
"docid": "neg:1840183_5",
"text": "Program autotuning has been shown to achieve better or more portable performance in a number of domains. However, autotuners themselves are rarely portable between projects, for a number of reasons: using a domain-informed search space representation is critical to achieving good results; search spaces can be intractably large and require advanced machine learning techniques; and the landscape of search spaces can vary greatly between different problems, sometimes requiring domain specific search techniques to explore efficiently.\n This paper introduces OpenTuner, a new open source framework for building domain-specific multi-objective program autotuners. OpenTuner supports fully-customizable configuration representations, an extensible technique representation to allow for domain-specific techniques, and an easy to use interface for communicating with the program to be autotuned. A key capability inside OpenTuner is the use of ensembles of disparate search techniques simultaneously; techniques that perform well will dynamically be allocated a larger proportion of tests. We demonstrate the efficacy and generality of OpenTuner by building autotuners for 7 distinct projects and 16 total benchmarks, showing speedups over prior techniques of these projects of up to 2.8x with little programmer effort.",
"title": ""
},
{
"docid": "neg:1840183_6",
"text": "EEG signals, measuring transient brain activities, can be used as a source of biometric information with potential application in high-security person recognition scenarios. However, due to the inherent nature of these signals and the process used for their acquisition, their effective preprocessing is critical for their successful utilisation. In this paper we compare the effectiveness of different wavelet-based noise removal methods and propose an EEG-based biometric identification system which combines two such de-noising methods to enhance the signal preprocessing stage. In tests using 50 subjects from a public database, the proposed new approach is shown to provide improved identification performance over alternative techniques. Another important preprocessing consideration is the segmentation of the EEG record prior to de-noising. Different segmentation approaches were investigated and the trade-off between performance and computation time is explored. Finally the paper reports on the impact of the choice of wavelet function used for feature extraction on system performance.",
"title": ""
},
{
"docid": "neg:1840183_7",
"text": "This paper presents EsdRank, a new technique for improving ranking using external semi-structured data such as controlled vocabularies and knowledge bases. EsdRank treats vocabularies, terms and entities from external data, as objects connecting query and documents. Evidence used to link query to objects, and to rank documents are incorporated as features between query-object and object-document correspondingly. A latent listwise learning to rank algorithm, Latent-ListMLE, models the objects as latent space between query and documents, and learns how to handle all evidence in a unified procedure from document relevance judgments. EsdRank is tested in two scenarios: Using a knowledge base for web search, and using a controlled vocabulary for medical search. Experiments on TREC Web Track and OHSUMED data show significant improvements over state-of-the-art baselines.",
"title": ""
},
{
"docid": "neg:1840183_8",
"text": "We propose a general purpose variational inference algorithm that forms a natural counterpart of gradient descent for optimization. Our method iteratively transports a set of particles to match the target distribution, by applying a form of functional gradient descent that minimizes the KL divergence. Empirical studies are performed on various real world models and datasets, on which our method is competitive with existing state-of-the-art methods. The derivation of our method is based on a new theoretical result that connects the derivative of KL divergence under smooth transforms with Stein’s identity and a recently proposed kernelized Stein discrepancy, which is of independent interest.",
"title": ""
},
{
"docid": "neg:1840183_9",
"text": "We introduce the Self-Adaptive Goal Generation Robust Intelligent Adaptive Curiosity (SAGG-RIAC) architecture as an intrinsically motivated goal exploration mechanism which allows active learning of inverse models in high-dimensional redundant robots. This allows a robot to efficiently and actively learn distributions of parameterized motor skills/policies that solve a corresponding distribution of parameterized tasks/goals. The architecture makes the robot sample actively novel parameterized tasks in the task space, based on a measure of competence progress, each of which triggers low-level goal-directed learning of the motor policy parameters that allow to solve it. For both learning and generalization, the system leverages regression techniques which allow to infer the motor policy parameters corresponding to a given novel parameterized task, and based on the previously learnt correspondences between policy and task parameters. We present experiments with high-dimensional continuous sensorimotor spaces in three different robotic setups: 1) learning the inverse kinematics in a highly-redundant robotic arm, 2) learning omnidirectional locomotion with motor primitives in a quadruped robot, 3) an arm learning to control a fishing rod with a flexible wire. We show that 1) exploration in the task space can be a lot faster than exploration in the actuator space for learning inverse models in redundant robots; 2) selecting goals maximizing competence progress creates developmental trajectories driving the robot to progressively focus on tasks of increasing complexity and is statistically significantly more efficient than selecting tasks randomly, as well as more efficient than different standard active motor babbling methods; 3) this architecture allows the robot to actively discover which parts of its task space it can learn to reach and which part it cannot.",
"title": ""
},
{
"docid": "neg:1840183_10",
"text": "Mobile devices have become an important part of our everyday life, harvesting more and more confidential user information. Their portable nature and the great exposure to security attacks, however, call out for stronger authentication mechanisms than simple password-based identification. Biometric authentication techniques have shown potential in this context. Unfortunately, prior approaches are either excessively prone to forgery or have too low accuracy to foster widespread adoption. In this paper, we propose sensor-enhanced keystroke dynamics, a new biometric mechanism to authenticate users typing on mobile devices. The key idea is to characterize the typing behavior of the user via unique sensor features and rely on standard machine learning techniques to perform user authentication. To demonstrate the effectiveness of our approach, we implemented an Android prototype system termed Unagi. Our implementation supports several feature extraction and detection algorithms for evaluation and comparison purposes. Experimental results demonstrate that sensor-enhanced keystroke dynamics can improve the accuracy of recent gestured-based authentication mechanisms (i.e., EER>0.5%) by one order of magnitude, and the accuracy of traditional keystroke dynamics (i.e., EER>7%) by two orders of magnitude.",
"title": ""
},
{
"docid": "neg:1840183_11",
"text": "In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks (GANs). Particularly, how these techniques help to improve each other. To this end, we analyze the limitation of adversarial training as the defense method, starting from questioning how well the robustness of a model can generalize. Then, we successfully improve the generalizability via data augmentation by the “fake” images sampled from generative adversarial network. After that, we are surprised to see that the resulting robust classifier leads to a better generator, for free. We intuitively explain this interesting phenomenon and leave the theoretical analysis for future work. Motivated by these observations, we propose a system that combines generator, discriminator, and adversarial attacker in a single network. After end-to-end training and fine tuning, our method can simultaneously improve the robustness of classifiers, measured by accuracy under strong adversarial attacks, and the quality of generators, evaluated both aesthetically and quantitatively. In terms of the classifier, we achieve better robustness than the state-of-the-art adversarial training algorithm proposed in (Madry etla., 2017), while our generator achieves competitive performance compared with SN-GAN (Miyato and Koyama, 2018). Source code is publicly available online at https://github.com/anonymous.",
"title": ""
},
{
"docid": "neg:1840183_12",
"text": "Reinforced concrete walls are commonly used as the primary lateral force-resisting system for tall buildings. As the tools for conducting nonlinear response history analysis have improved and with the advent of performancebased seismic design, reinforced concrete walls and core walls are often employed as the only lateral forceresisting system. Proper modelling of the load versus deformation behaviour of reinforced concrete walls and link beams is essential to accurately predict important response quantities. Given this critical need, an overview of modelling approaches appropriate to capture the lateral load responses of both slender and stout reinforced concrete walls, as well as link beams, is presented. Modelling of both fl exural and shear responses is addressed, as well as the potential impact of coupled fl exure–shear behaviour. Model results are compared with experimental results to assess the ability of common modelling approaches to accurately predict both global and local experimental responses. Based on the fi ndings, specifi c recommendations are made for general modelling issues, limiting material strains for combined bending and axial load, and shear backbone relations. Copyright © 2007 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840183_13",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For an effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step toward this end by characterizing the operational performance of a tier-1 cellular network in the U.S. during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 s shorter radio resource control timeouts as compared with routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events, and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "neg:1840183_14",
"text": "Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich. However, it is still an open question whether humans are prone to similar mistakes. Here, we address this question by leveraging recent techniques that transfer adversarial examples from computer vision models with known parameters and architecture to other models with unknown parameters and architecture, and by matching the initial processing of the human visual system. We find that adversarial examples that strongly transfer across computer vision models influence the classifications made by time-limited human observers.",
"title": ""
},
{
"docid": "neg:1840183_15",
"text": "This paper presents a comprehensive analysis and comparison of air-cored axial-flux permanent-magnet machines with different types of coil configurations. Although coil factor is particularly more sensitive to coil-band width and coil pitch in air-cored machines than conventional slotted machines, remarkably no comprehensive analytical equations exist. Here, new formulas are derived to compare the coil factor of two common concentrated-coil stator winding types. Then, respective coil factors for the winding types are used to determine the torque characteristics and, from that, the optimized coil configurations. Three-dimensional finite-element analysis (FEA) models are built to verify the analytical models. Furthermore, overlapping and wave windings are investigated and compared with the concentrated-coil types. Finally, a prototype machine is designed and built for experimental validations. The results show that the concentrated-coil type with constant coil pitch is superior to all other coil types under study.",
"title": ""
},
{
"docid": "neg:1840183_16",
"text": "Traffic light congestion normally occurs in urban areas where the number of vehicles is too many on the road. This problem drives the need for innovation and provide efficient solutions regardless this problem. Smart system that will monitor the congestion level at the traffic light will be a new option to replace the old system which is not practical anymore. Implementing internet of thinking (IoT) technology will provide the full advantage for monitoring and creating a congestion model based on sensor readings. Multiple sensor placements for each lane will give a huge advantage in detecting vehicle and increasing the accuracy in collecting data. To gather data from each sensor, the LoRaWAN technology is utilized where it features low power wide area network, low cost of implementation and the communication is secure bi-directional for the internet of thinking. The radio frequency used between end nodes to gateways range is estimated around 15-kilometer radius. A series of test is carried out to estimate the range of signal and it gives a positive result. The level of congestion for each lane will be displayed on Grafana dashboard and the algorithm can be calculated. This provides huge advantages to the implementation of this project, especially the scope of the project will be focus in urban areas where the level of congestion is bad.",
"title": ""
},
{
"docid": "neg:1840183_17",
"text": "The paper presents a system, Heart Track, which aims for automated ECG (Electrocardiogram) analysis. Different modules and algorithms which are proposed and used for implementing the system are discussed. The ECG is the recording of the electrical activity of the heart and represents the depolarization and repolarization of the heart muscle cells and the heart chambers. The electrical signals from the heart are measured non-invasively using skin electrodes and appropriate electronic measuring equipment. ECG is measured using 12 leads which are placed at specific positions on the body [2]. The required data is converted into ECG curve which possesses a characteristic pattern. Deflections from this normal ECG pattern can be used as a diagnostic tool in medicine in the detection of cardiac diseases. Diagnosis of large number of cardiac disorders can be predicted from the ECG waves wherein each component of the ECG wave is associated with one or the other disorder. This paper concentrates entirely on detection of Myocardial Infarction, hence only the related components (ST segment) of the ECG wave are analyzed.",
"title": ""
},
{
"docid": "neg:1840183_18",
"text": "Social networking sites occupy increasing fields of daily life and act as important communication channels today. But recent research also discusses the dark side of these sites, which expresses in form of stress, envy, addiction or even depression. Nevertheless, there must be a reason why people use social networking sites, even though they face related risks. One reason is human curiosity that tempts users to behave like this. The research on hand presents the impact of curiosity on user acceptance of social networking sites, which is theorized and empirically evaluated by using the technology acceptance model and a quantitative study among Facebook users. It further reveals that especially two types of human curiosity, epistemic and interpersonal curiosity, influence perceived usefulness and perceived enjoyment, and with it technology acceptance.",
"title": ""
},
{
"docid": "neg:1840183_19",
"text": "Cerebral gray-matter volume (GMV) decreases in normal aging but the extent of the decrease may be experience-dependent. Bilingualism may be one protective factor and in this article we examine its potential protective effect on GMV in a region that shows strong age-related decreases-the left anterior temporal pole. This region is held to function as a conceptual hub and might be expected to be a target of plastic changes in bilingual speakers because of the requirement for these speakers to store and differentiate lexical concepts in 2 languages to guide speech production and comprehension processes. In a whole brain comparison of bilingual speakers (n = 23) and monolingual speakers (n = 23), regressing out confounding factors, we find more extensive age-related decreases in GMV in the monolingual brain and significantly increased GMV in left temporal pole for bilingual speakers. Consistent with a specific neuroprotective effect of bilingualism, region of interest analyses showed a significant positive correlation between naming performance in the second language and GMV in this region. The effect appears to be bilateral though because there was a nonsignificantly different effect of naming performance on GMV in the right temporal pole. Our data emphasize the vulnerability of the temporal pole to normal aging and the value of bilingualism as both a general and specific protective factor to GMV decreases in healthy aging.",
"title": ""
}
] |
1840184 | Sex & the City . How Emotional Factors Affect Financial Choices | [
{
"docid": "pos:1840184_0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/ucpress.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "pos:1840184_1",
"text": "In the health sciences it is quite common to carry out studies designed to determine the influence of one or more variables upon a given response variable. When this response variable is numerical, simple or multiple regression techniques are used, depending on the case. If the response variable is a qualitative variable (dichotomic or polychotomic), as for example the presence or absence of a disease, linear regression methodology is not applicable, and simple or multinomial logistic regression is used, as applicable.",
"title": ""
}
] | [
{
"docid": "neg:1840184_0",
"text": "BACKGROUND AND OBJECTIVE\nMildronate, an inhibitor of carnitine-dependent metabolism, is considered to be an anti-ischemic drug. This study is designed to evaluate the efficacy and safety of mildronate injection in treating acute ischemic stroke.\n\n\nMETHODS\nWe performed a randomized, double-blind, multicenter clinical study of mildronate injection for treating acute cerebral infarction. 113 patients in the experimental group received mildronate injection, and 114 patients in the active-control group received cinepazide injection. In addition, both groups were given aspirin as a basic treatment. Modified Rankin Scale (mRS) score was performed at 2 weeks and 3 months after treatment. National Institutes of Health Stroke Scale (NIHSS) score and Barthel Index (BI) score were performed at 2 weeks after treatment, and then vital signs and adverse events were evaluated.\n\n\nRESULTS\nA total of 227 patients were randomized to treatment (n = 113, mildronate; n = 114, active-control). After 3 months, there was no significant difference for the primary endpoint between groups categorized in terms of mRS scores of 0-1 and 0-2 (p = 0.52 and p = 0.07, respectively). There were also no significant differences for the secondary endpoint between groups categorized in terms of NIHSS scores of >5 and >8 (p = 0.98 and p = 0.97, respectively) or BI scores of >75 and >95 (p = 0.49 and p = 0.47, respectively) at 15 days. The incidence of serious adverse events was similar between the two groups.\n\n\nCONCLUSION\nMildronate injection is as effective and safe as cinepazide injection in treating acute cerebral infarction.",
"title": ""
},
{
"docid": "neg:1840184_1",
"text": "This is the third in a series of four tutorial papers on biomedical signal processing and concerns the estimation of the power spectrum (PS) and coherence function (CF) od biomedical data. The PS is introduced and its estimation by means of the discrete Fourier transform is considered in terms of the problem of resolution in the frequency domain. The periodogram is introduced and its variance, bias and the effects of windowing and smoothing are considered. The use of the autocovariance function as a stage in power spectral estimation is described and the effects of windows in the autocorrelation domain are compared with the related effects of windows in the original time domain. The concept of coherence is introduced and the many ways in which coherence functions might be estimated are considered.",
"title": ""
},
{
"docid": "neg:1840184_2",
"text": "Today, as the increasing the amount of using internet, there are so most information interchanges are performed in that internet. So, the methods used as intrusion detective tools for protecting network systems against diverse attacks are became too important. The available of IDS are getting more powerful. Support Vector Machine was used as the classical pattern reorganization tools have been widely used for Intruder detections. There have some different characteristic of features in building an Intrusion Detection System. Conventional SVM do not concern about that. Our enhanced SVM Model proposed with an Recursive Feature Elimination (RFE) and kNearest Neighbor (KNN) method to perform a feature ranking and selection task of the new model. RFE can reduce redundant & recursive features and KNN can select more precisely than the conventional SVM. Experiments and comparisons are conducted through intrusion dataset: the KDD Cup 1999 dataset.",
"title": ""
},
{
"docid": "neg:1840184_3",
"text": "Abstract-The string-searching problem is to find all occurrences of pattern(s) in a text string. The Aho-Corasick string searching algorithm simultaneously finds all occurrences of multiple patterns in one pass through the text. On the other hand, the Boyer-Moore algorithm is understood to be the fastest algorithm for a single pattern. By combining the ideas of these two algorithms, we present an efficient string searching algorithm for multiple patterns. The algorithm runs in sublinear time, on the average, as the BM algorithm achieves, and its preprocessing time is linear proportional to the sum of the lengths of the patterns like the AC algorithm.",
"title": ""
},
{
"docid": "neg:1840184_4",
"text": "─ Cuckoo Search (CS) is a new met heuristic algorithm. It is being used for solving optimization problem. It was developed in 2009 by XinShe Yang and Susah Deb. Uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the effect of the experimental results are discussed and proposed for future research. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.",
"title": ""
},
{
"docid": "neg:1840184_5",
"text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.",
"title": ""
},
{
"docid": "neg:1840184_6",
"text": "Driver yawning detection is one of the key technologies used in driver fatigue monitoring systems. Real-time driver yawning detection is a very challenging problem due to the dynamics in driver's movements and lighting conditions. In this paper, we present a yawning detection system that consists of a face detector, a nose detector, a nose tracker and a yawning detector. Deep learning algorithms are developed for detecting driver face area and nose location. A nose tracking algorithm that combines Kalman filter with a dedicated open-source TLD (Track-Learning-Detection) tracker is developed to generate robust tracking results under dynamic driving conditions. Finally a neural network is developed for yawning detection based on the features including nose tracking confidence value, gradient features around corners of mouth and face motion features. Experiments are conducted on real-world driving data, and results show that the deep convolutional networks can generate a satisfactory classification result for detecting driver's face and nose when compared with other pattern classification methods, and the proposed yawning detection system is effective in real-time detection of driver's yawning states.",
"title": ""
},
{
"docid": "neg:1840184_7",
"text": "We present DEC0DE, a system for recovering information from phones with unknown storage formats, a critical problem for forensic triage. Because phones have myriad custom hardware and software, we examine only the stored data. Via flexible descriptions of typical data structures, and using a classic dynamic programming algorithm, we are able to identify call logs and address book entries in phones across varied models and manufacturers. We designed DEC0DE by examining the formats of one set of phone models, and we evaluate its performance on other models. Overall, we are able to obtain high performance for these unexamined models: an average recall of 97% and precision of 80% for call logs; and average recall of 93% and precision of 52% for address books. Moreover, at the expense of recall dropping to 14%, we can increase precision of address book recovery to 94% by culling results that don’t match between call logs and address book entries on the same phone.",
"title": ""
},
{
"docid": "neg:1840184_8",
"text": "Massive MIMO, also known as very-large MIMO or large-scale antenna systems, is a new technique that potentially can offer large network capacities in multi-user scenarios. With a massive MIMO system, we consider the case where a base station equipped with a large number of antenna elements simultaneously serves multiple single-antenna users in the same time-frequency resource. So far, investigations are mostly based on theoretical channels with independent and identically distributed (i.i.d.) complex Gaussian coefficients, i.e., i.i.d. Rayleigh channels. Here, we investigate how massive MIMO performs in channels measured in real propagation environments. Channel measurements were performed at 2.6 GHz using a virtual uniform linear array (ULA), which has a physically large aperture, and a practical uniform cylindrical array (UCA), which is more compact in size, both having 128 antenna ports. Based on measurement data, we illustrate channel behavior of massive MIMO in three representative propagation conditions, and evaluate the corresponding performance. The investigation shows that the measured channels, for both array types, allow us to achieve performance close to that in i.i.d. Rayleigh channels. It is concluded that in real propagation environments we have characteristics that can allow for efficient use of massive MIMO, i.e., the theoretical advantages of this new technology can also be harvested in real channels.",
"title": ""
},
{
"docid": "neg:1840184_9",
"text": "Binary content-addressable memory (BiCAM) is a popular high speed search engine in hardware, which provides output typically in one clock cycle. But speed of CAM comes at the cost of various disadvantages, such as high latency, low storage density, and low architectural scalability. In addition, field-programmable gate arrays (FPGAs), which are used in many applications because of its advantages, do not have hard IPs for CAM. Since FPGAs have embedded IPs for random-access memories (RAMs), several RAM-based CAM architectures on FPGAs are available in the literature. However, these architectures are especially targeted for ternary CAMs, not for BiCAMs; thus, the available RAM-based CAMs may not be fully beneficial for BiCAMs in terms of architectural design. Since modern FPGAs are enriched with logical resources, why not to configure them to design BiCAM on FPGA? This letter presents a logic-based high performance BiCAM architecture (LH-CAM) using Xilinx FPGA. The proposed CAM is composed of CAM words and associated comparators. A sample of LH-CAM of size ${64\\times 36}$ is implemented on Xilinx Virtex-6 FPGA. Compared with the latest prior work, the proposed CAM is much simpler in architecture, storage efficient, reduces power consumption by 40.92%, and improves speed by 27.34%.",
"title": ""
},
{
"docid": "neg:1840184_10",
"text": "Knowledge graph completion aims to perform link prediction between entities. In this paper, we consider the approach of knowledge graph embeddings. Recently, models such as TransE and TransH build entity and relation embeddings by regarding a relation as translation from head entity to tail entity. We note that these models simply put both entities and relations within the same semantic space. In fact, an entity may have multiple aspects and various relations may focus on different aspects of entities, which makes a common space insufficient for modeling. In this paper, we propose TransR to build entity and relation embeddings in separate entity space and relation spaces. Afterwards, we learn embeddings by first projecting entities from entity space to corresponding relation space and then building translations between projected entities. In experiments, we evaluate our models on three tasks including link prediction, triple classification and relational fact extraction. Experimental results show significant and consistent improvements compared to stateof-the-art baselines including TransE and TransH. The source code of this paper can be obtained from https: //github.com/mrlyk423/relation extraction.",
"title": ""
},
{
"docid": "neg:1840184_11",
"text": "The class of materials combining high electrical or thermal conductivity, optical transparency and flexibility is crucial for the development of many future electronic and optoelectronic devices. Silver nanowire networks show very promising results and represent a viable alternative to the commonly used, scarce and brittle indium tin oxide. The science and technology research of such networks are reviewed to provide a better understanding of the physical and chemical properties of this nanowire-based material while opening attractive new applications.",
"title": ""
},
{
"docid": "neg:1840184_12",
"text": "Due to the explosive growth of wireless devices and wireless traffic, the spectrum scarcity problem is becoming more urgent in numerous Radio Frequency (RF) systems. At the same time, many studies have shown that spectrum resources allocated to various existing RF systems are largely underutilized. As a potential solution to this spectrum scarcity problem, spectrum sharing among multiple, potentially dissimilar RF systems has been proposed. However, such spectrum sharing solutions are challenging to develop due to the lack of efficient coordination schemes and potentially different PHY/MAC properties. In this paper, we investigate existing spectrum sharing methods facilitating coexistence of various RF systems. The cognitive radio technique, which has been the subject of various surveys, constitutes a subset of our wider scope. We study more general coexistence scenarios and methods such as coexistence of communication systems with similar priorities, utilizing similar or different protocols or standards, as well as the coexistence of communication and non-communication systems using the same spectral resources. Finally, we explore open research issues on the spectrum sharing methods as well as potential approaches to resolving these issues. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840184_13",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "neg:1840184_14",
"text": "Given the importance of relation or event extraction from biomedical research publications to support knowledge capture and synthesis, and the strong dependency of approaches to this information extraction task on syntactic information, it is valuable to understand which approaches to syntactic processing of biomedical text have the highest performance. We perform an empirical study comparing state-of-the-art traditional feature-based and neural network-based models for two core natural language processing tasks of part-of-speech (POS) tagging and dependency parsing on two benchmark biomedical corpora, GENIA and CRAFT. To the best of our knowledge, there is no recent work making such comparisons in the biomedical context; specifically no detailed analysis of neural models on this data is available. Experimental results show that in general, the neural models outperform the feature-based models on two benchmark biomedical corpora GENIA and CRAFT. We also perform a task-oriented evaluation to investigate the influences of these models in a downstream application on biomedical event extraction, and show that better intrinsic parsing performance does not always imply better extrinsic event extraction performance. We have presented a detailed empirical study comparing traditional feature-based and neural network-based models for POS tagging and dependency parsing in the biomedical context, and also investigated the influence of parser selection for a biomedical event extraction downstream task. We make the retrained models available at https://github.com/datquocnguyen/BioPosDep.",
"title": ""
},
{
"docid": "neg:1840184_15",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "neg:1840184_16",
"text": "Fault tolerance is gaining interest as a means to increase the reliability and availability of distributed energy systems. In this paper, a voltage-oriented doubly fed induction generator, which is often used in wind turbines, is examined. Furthermore, current, voltage, and position sensor fault detection, isolation, and reconfiguration are presented. Machine operation is not interrupted. A bank of observers provides residuals for fault detection and replacement signals for the reconfiguration. Control is temporarily switched from closed loop into open-loop to decouple the drive from faulty sensor readings. During a short period of open-loop operation, the fault is isolated using parity equations. Replacement signals from observers are used to reconfigure the drive and reenter closed-loop control. There are no large transients in the current. Measurement results and stability analysis show good results.",
"title": ""
},
{
"docid": "neg:1840184_17",
"text": "Mobile devices and their application marketplaces drive the entire economy of the today’s mobile landscape. Android platforms alone have produced staggering revenues, exceeding five billion USD, which has attracted cybercriminals and increased malware in Android markets at an alarming rate. To better understand this slew of threats, we present CopperDroid, an automatic VMI-based dynamic analysis system to reconstruct the behaviors of Android malware. The novelty of CopperDroid lies in its agnostic approach to identify interesting OSand high-level Android-specific behaviors. It reconstructs these behaviors by observing and dissecting system calls and, therefore, is resistant to the multitude of alterations the Android runtime is subjected to over its life-cycle. CopperDroid automatically and accurately reconstructs events of interest that describe, not only well-known process-OS interactions (e.g., file and process creation), but also complex intraand inter-process communications (e.g., SMS reception), whose semantics are typically contextualized through complex Android objects. Because CopperDroid’s reconstruction mechanisms are agnostic to the underlying action invocation methods, it is able to capture actions initiated both from Java and native code execution. CopperDroid’s analysis generates detailed behavioral profiles that abstract a large stream of low-level—often uninteresting—events into concise, high-level semantics, which are well-suited to provide insightful behavioral traits and open the possibility to further research directions. We carried out an extensive evaluation to assess the capabilities and performance of CopperDroid on more than 2,900 Android malware samples. Our experiments show that CopperDroid faithfully reconstructs OSand Android-specific behaviors. Additionally, we demonstrate how CopperDroid can be leveraged to disclose additional behaviors through the use of a simple, yet effective, app stimulation technique. 
Using this technique, we successfully triggered and disclosed additional behaviors on more than 60% of the analyzed malware samples. This qualitatively demonstrates the versatility of CopperDroid’s ability to improve dynamic-based code coverage.",
"title": ""
},
{
"docid": "neg:1840184_18",
"text": "We study the problem of computing all Pareto-optimal journeys in a dynamic public transit network for two criteria: arrival time and number of transfers. Existing algorithms consider this as a graph problem, and solve it using variants of Dijkstra’s algorithm. Unfortunately, this leads to either high query times or suboptimal solutions. We take a different approach. We introduce RAPTOR, our novel round-based public transit router. Unlike previous algorithms, it is not Dijkstrabased, looks at each route (such as a bus line) in the network at most once per round, and can be made even faster with simple pruning rules and parallelization using multiple cores. Because it does not rely on preprocessing, RAPTOR works in fully dynamic scenarios. Moreover, it can be easily extended to handle flexible departure times or arbitrary additional criteria, such as fare zones. When run on London’s complex public transportation network, RAPTOR computes all Paretooptimal journeys between two random locations an order of magnitude faster than previous approaches, which easily enables interactive applications.",
"title": ""
},
{
"docid": "neg:1840184_19",
"text": "Engagement is a key reason for introducing gamification to learning and thus serves as an important measurement of its effectiveness. Based on a literature review and meta-synthesis, this paper proposes a comprehensive framework of engagement in gamification for learning. The framework sketches out the connections among gamification strategies, dimensions of engagement, and the ultimate learning outcome. It also elicits other task - and user - related factors that may potentially impact the effect of gamification on learner engagement. To verify and further strengthen the framework, we conducted a user study to demonstrate that: 1) different gamification strategies can trigger different facets of engagement; 2) the three dimensions of engagement have varying effects on skill acquisition and transfer; and 3) task nature and learner characteristics that were overlooked in previous studies can influence the engagement process. Our framework provides an in-depth understanding of the mechanism of gamification for learning, and can serve as a theoretical foundation for future research and design.",
"title": ""
}
] |
1840185 | An ontology approach to object-based image retrieval | [
{
"docid": "pos:1840185_0",
"text": "THEORIES IN AI FALL INT O TWO broad categories: mechanismtheories and contenttheories. Ontologies are content the ories about the sor ts of objects, properties of objects,and relations between objects tha t re possible in a specif ed domain of kno wledge. They provide potential ter ms for descr ibing our knowledge about the domain. In this article, we survey the recent de velopment of the f ield of ontologies in AI. We point to the some what different roles ontologies play in information systems, naturallanguage under standing, and knowledgebased systems. Most r esear ch on ontologies focuses on what one might characterize as domain factual knowledge, because kno wlede of that type is par ticularly useful in natural-language under standing. There is another class of ontologies that are important in KBS—one that helps in shar ing knoweldge about reasoning str ategies or pr oblemsolving methods. In a f ollow-up article, we will f ocus on method ontolo gies.",
"title": ""
},
{
"docid": "pos:1840185_1",
"text": "ÐRetrieving images from large and varied collections using image content as a key is a challenging and important problem. We present a new image representation that provides a transformation from the raw pixel data to a small set of image regions that are coherent in color and texture. This aBlobworldo representation is created by clustering pixels in a joint color-texture-position feature space. The segmentation algorithm is fully automatic and has been run on a collection of 10,000 natural images. We describe a system that uses the Blobworld representation to retrieve images from this collection. An important aspect of the system is that the user is allowed to view the internal representation of the submitted image and the query results. Similar systems do not offer the user this view into the workings of the system; consequently, query results from these systems can be inexplicable, despite the availability of knobs for adjusting the similarity metrics. By finding image regions that roughly correspond to objects, we allow querying at the level of objects rather than global image properties. We present results indicating that querying for images using Blobworld produces higher precision than does querying using color and texture histograms of the entire image in cases where the image contains distinctive objects. Index TermsÐSegmentation and grouping, image retrieval, image querying, clustering, Expectation-Maximization.",
"title": ""
}
] | [
{
"docid": "neg:1840185_0",
"text": "In order to match shoppers with desired products and provide personalized promotions, whether in online or offline shopping worlds, it is critical to model both consumer preferences and price sensitivities simultaneously. Personalized preferences have been thoroughly studied in the field of recommender systems, though price (and price sensitivity) has received relatively little attention. At the same time, price sensitivity has been richly explored in the area of economics, though typically not in the context of developing scalable, working systems to generate recommendations. In this study, we seek to bridge the gap between large-scale recommender systems and established consumer theories from economics, and propose a nested feature-based matrix factorization framework to model both preferences and price sensitivities. Quantitative and qualitative results indicate the proposed personalized, interpretable and scalable framework is capable of providing satisfying recommendations (on two datasets of grocery transactions) and can be applied to obtain economic insights into consumer behavior.",
"title": ""
},
{
"docid": "neg:1840185_1",
"text": "The life time extension in the wireless sensor network (WSN) is the major concern in real time application, if the battery attached with the sensor node life is not optimized properly then the network life fall short. A protocol using a new evolutionary technique, cat swarm optimization (CSO), is designed and implemented in real time to minimize the intra-cluster distances between the cluster members and their cluster heads and optimize the energy distribution for the WSNs. We analyzed the performance of WSN protocol with the help of sensor nodes deployed in a field and grouped in to clusters. The novelty in our proposed scheme is considering the received signal strength, residual battery voltage and intra cluster distance of sensor nodes in cluster head selection with the help of CSO. The result is compared with the well-known protocol Low-energy adaptive clustering hierarchy-centralized (LEACH-C) and the swarm based optimization technique Particle swarm optimization (PSO). It was found that the battery energy level has been increased considerably of the traditional LEACH and PSO algorithm.",
"title": ""
},
{
"docid": "neg:1840185_2",
"text": "Processes causing greenhouse gas (GHG) emissions benefit humans by providing consumer goods and services. This benefit, and hence the responsibility for emissions, varies by purpose or consumption category and is unevenly distributed across and within countries. We quantify greenhouse gas emissions associated with the final consumption of goods and services for 73 nations and 14 aggregate world regions. We analyze the contribution of 8 categories: construction, shelter, food, clothing, mobility, manufactured products, services, and trade. National average per capita footprints vary from 1 tCO2e/y in African countries to approximately 30/y in Luxembourg and the United States. The expenditure elasticity is 0.57. The cross-national expenditure elasticity for just CO2, 0.81, corresponds remarkably well to the cross-sectional elasticities found within nations, suggesting a global relationship between expenditure and emissions that holds across several orders of magnitude difference. On the global level, 72% of greenhouse gas emissions are related to household consumption, 10% to government consumption, and 18% to investments. Food accounts for 20% of GHG emissions, operation and maintenance of residences is 19%, and mobility is 17%. Food and services are more important in developing countries, while mobility and manufactured goods rise fast with income and dominate in rich countries. The importance of public services and manufactured goods has not yet been sufficiently appreciated in policy. Policy priorities hence depend on development status and country-level characteristics.",
"title": ""
},
{
"docid": "neg:1840185_3",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "neg:1840185_4",
"text": "A method to design arbitrary three-way power dividers with ultra-wideband performance is presented. The proposed devices utilize a broadside-coupled structure, which has three coupled layers. The method assumes general asymmetric coupled layers. The design approach exploits the three fundamental modes of propagation: even-even, odd-odd, and odd-even, and the conformal mapping technique to find the coupling factors between the different layers. The method is used to design 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 three-way power dividers. The designed devices feature a multilayer broadside-coupled microstrip-slot-microstrip configuration using elliptical-shaped structures. The developed power dividers have a compact size with an overall dimension of 20 mm 30 mm. The simulated and measured results of the manufactured devices show an insertion loss equal to the nominated value 1 dB. The return loss for the input/output ports of the devices is better than 17, 18, and 13 dB, whereas the isolation between the output ports is better than 17, 14, and 15 dB for the 1 : 1 : 1, 2 : 1 : 1, and 4 : 2 : 1 dividers, respectively, across the 3.1-10.6-GHz band.",
"title": ""
},
{
"docid": "neg:1840185_5",
"text": "Recommender systems for automatically suggested items of interest to users have become increasingly essential in fields where mass personalization is highly valued. The popular core techniques of such systems are collaborative filtering, content-based filtering and combinations of these. In this paper, we discuss hybrid approaches, using collaborative and also content data to address cold-start - that is, giving recommendations to novel users who have no preference on any items, or recommending items that no user of the community has seen yet. While there have been lots of studies on solving the item-side problems, solution for user-side problems has not been seen public. So we develop a hybrid model based on the analysis of two probabilistic aspect models using pure collaborative filtering to combine with users' information. The experiments with MovieLen data indicate substantial and consistent improvements of this model in overcoming the cold-start user-side problem.",
"title": ""
},
{
"docid": "neg:1840185_6",
"text": "This article discusses the logical implementation of the media access control and the physical layer of 100 Gb/s Ethernet. The target are a MAC/PCS LSI, supporting MAC and physical coding sublayer, and a gearbox LSI, providing 10:4 parallel lane-width exchange inside an optical module. The two LSIs are connected by a 100 gigabit attachment unit interface, which consists of ten 10 Gb/s lines. We realized a MAC/PCS logical circuit with a low-frequency clock on a FPGA, whose size is 250 kilo LUTs with a 5.7 Mbit RAM, and the power consumption of the gearbox LSI estimated to become 2.3 W.",
"title": ""
},
{
"docid": "neg:1840185_7",
"text": "METHODOLOGY AND PRINCIPAL FINDINGS\nOleuropein promoted cultured human follicle dermal papilla cell proliferation and induced LEF1 and Cyc-D1 mRNA expression and β-catenin protein expression in dermal papilla cells. Nuclear accumulation of β-catenin in dermal papilla cells was observed after oleuropein treatment. Topical application of oleuropein (0.4 mg/mouse/day) to C57BL/6N mice accelerated the hair-growth induction and increased the size of hair follicles in telogenic mouse skin. The oleuropein-treated mouse skin showed substantial upregulation of Wnt10b, FZDR1, LRP5, LEF1, Cyc-D1, IGF-1, KGF, HGF, and VEGF mRNA expression and β-catenin protein expression.\n\n\nCONCLUSIONS AND SIGNIFICANCE\nThese results demonstrate that topical oleuroepin administration induced anagenic hair growth in telogenic C57BL/6N mouse skin. The hair-growth promoting effect of oleuropein in mice appeared to be associated with the stimulation of the Wnt10b/β-catenin signaling pathway and the upregulation of IGF-1, KGF, HGF, and VEGF gene expression in mouse skin tissue.",
"title": ""
},
{
"docid": "neg:1840185_8",
"text": "In today’s economic turmoil, the pay-per-use pricing model of cloud computing, its flexibility and scalability and the potential for better security and availability levels are alluring to both SMEs and large enterprises. However, cloud computing is fraught with security risks which need to be carefully evaluated before any engagement in this area. This article elaborates on the most important risks inherent to the cloud such as information security, regulatory compliance, data location, investigative support, provider lock-in and disaster recovery. We focus on risk and control analysis in relation to a sample of Swiss companies with regard to their prospective adoption of public cloud services. We observe a sufficient degree of risk awareness with a focus on those risks that are relevant to the IT function to be migrated to the cloud. Moreover, the recommendations as to the adoption of cloud services depend on the company’s size with larger and more technologically advanced companies being better prepared for the cloud. As an exploratory first step, the results of this study would allow us to design and implement broader research into cloud computing risk management in Switzerland.",
"title": ""
},
{
"docid": "neg:1840185_9",
"text": "BACKGROUND\nBacterial vaginosis (BV) has been most consistently linked to sexual behaviour, and the epidemiological profile of BV mirrors that of established sexually transmitted infections (STIs). It remains a matter of debate however whether BV pathogenesis does actually involve sexual transmission of pathogenic micro-organisms from men to women. We therefore made a critical appraisal of the literature on BV in relation to sexual behaviour.\n\n\nDISCUSSION\nG. vaginalis carriage and BV occurs rarely with children, but has been observed among adolescent, even sexually non-experienced girls, contradicting that sexual transmission is a necessary prerequisite to disease acquisition. G. vaginalis carriage is enhanced by penetrative sexual contact but also by non-penetrative digito-genital contact and oral sex, again indicating that sex per se, but not necessarily coital transmission is involved. Several observations also point at female-to-male rather than at male-to-female transmission of G. vaginalis, presumably explaining the high concordance rates of G. vaginalis carriage among couples. Male antibiotic treatment has not been found to protect against BV, condom use is slightly protective, whereas male circumcision might protect against BV. BV is also common among women-who-have-sex-with-women and this relates at least in part to non-coital sexual behaviours. Though male-to-female transmission cannot be ruled out, overall there is little evidence that BV acts as an STD. Rather, we suggest BV may be considered a sexually enhanced disease (SED), with frequency of intercourse being a critical factor. This may relate to two distinct pathogenetic mechanisms: (1) in case of unprotected intercourse alkalinisation of the vaginal niche enhances a shift from lactobacilli-dominated microflora to a BV-like type of microflora and (2) in case of unprotected and protected intercourse mechanical transfer of perineal enteric bacteria is enhanced by coitus. 
A similar mechanism of mechanical transfer may explain the consistent link between non-coital sexual acts and BV. Similar observations supporting the SED pathogenetic model have been made for vaginal candidiasis and for urinary tract infection.\n\n\nSUMMARY\nThough male-to-female transmission cannot be ruled out, overall there is incomplete evidence that BV acts as an STI. We believe however that BV may be considered a sexually enhanced disease, with frequency of intercourse being a critical factor.",
"title": ""
},
{
"docid": "neg:1840185_10",
"text": "In presenting this thesis in partial fulfillment of the requirements for a Master's degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this thesis is allowable only for scholarly purposes, consistent with \"fair use\" as prescribed in the U.S. Copyright Law. Any other reproduction for any purposes or by any means shall not be allowed without my written permission. PREFACE Over the last several years, professionals from many different fields have come to the Human Interface Technology Laboratory (H.I.T.L) to discover and learn about virtual environments. In general, they are impressed by their experiences and express the tremendous potential the tool has in their respective fields. But the potentials are always projected far in the future, and the tool remains just a concept. This is justifiable because the quality of the visual experience is so much less than what people are used to seeing; high definition television, breathtaking special cinematographic effects and photorealistic computer renderings. Instead, the models in virtual environments are very simple looking; they are made of small spaces, filled with simple or abstract looking objects of little color distinctions as seen through displays of noticeably low resolution and at an update rate which leaves much to be desired. Clearly, for most applications, the requirements of precision have not been met yet with virtual interfaces as they exist today. However, there are a few domains where the relatively low level of the technology could be perfectly appropriate. In general, these are applications which require that the information be presented in symbolic or representational form. 
Having studied architecture, I knew that there are moments during the early part of the design process when conceptual decisions are made which require precisely the simple and representative nature available in existing virtual environments. This was a marvelous discovery for me because I had found a viable use for virtual environments which could be immediately beneficial to architecture, my shared area of interest. It would be further beneficial to architecture in that the virtual interface equipment I would be evaluating at the H.I.T.L. happens to be relatively less expensive and more practical than other configurations such as the \"Walkthrough\" at the University of North Carolina. The setup at the H.I.T.L. could be easily introduced into architectural firms because it takes up very little physical room (150 …",
"title": ""
},
{
"docid": "neg:1840185_11",
"text": "BACKGROUND\nThe efficacy of new antihypertensive drugs has been questioned. We compared the effects of conventional and newer antihypertensive drugs on cardiovascular mortality and morbidity in elderly patients.\n\n\nMETHODS\nWe did a prospective, randomised trial in 6614 patients aged 70-84 years with hypertension (blood pressure > or = 180 mm Hg systolic, > or = 105 mm Hg diastolic, or both). Patients were randomly assigned conventional antihypertensive drugs (atenolol 50 mg, metoprolol 100 mg, pindolol 5 mg, or hydrochlorothiazide 25 mg plus amiloride 2.5 mg daily) or newer drugs (enalapril 10 mg or lisinopril 10 mg, or felodipine 2.5 mg or isradipine 2-5 mg daily). We assessed fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease. Analysis was by intention to treat.\n\n\nFINDINGS\nBlood pressure was decreased similarly in all treatment groups. The primary combined endpoint of fatal stroke, fatal myocardial infarction, and other fatal cardiovascular disease occurred in 221 of 2213 patients in the conventional drugs group (19.8 events per 1000 patient-years) and in 438 of 4401 in the newer drugs group (19.8 per 1000; relative risk 0.99 [95% CI 0.84-1.16], p=0.89). The combined endpoint of fatal and non-fatal stroke, fatal and non-fatal myocardial infarction, and other cardiovascular mortality occurred in 460 patients taking conventional drugs and in 887 taking newer drugs (0.96 [0.86-1.08], p=0.49).\n\n\nINTERPRETATION\nOld and new antihypertensive drugs were similar in prevention of cardiovascular mortality or major events. Decrease in blood pressure was of major importance for the prevention of cardiovascular events.",
"title": ""
},
{
"docid": "neg:1840185_12",
"text": "Metastasis remains the greatest challenge in the clinical management of cancer. Cell motility is a fundamental and ancient cellular behaviour that contributes to metastasis and is conserved in simple organisms. In this Review, we evaluate insights relevant to human cancer that are derived from the study of cell motility in non-mammalian model organisms. Dictyostelium discoideum, Caenorhabditis elegans, Drosophila melanogaster and Danio rerio permit direct observation of cells moving in complex native environments and lend themselves to large-scale genetic and pharmacological screening. We highlight insights derived from each of these organisms, including the detailed signalling network that governs chemotaxis towards chemokines; a novel mechanism of basement membrane invasion; the positive role of E-cadherin in collective direction-sensing; the identification and optimization of kinase inhibitors for metastatic thyroid cancer on the basis of work in flies; and the value of zebrafish for live imaging, especially of vascular remodelling and interactions between tumour cells and host tissues. While the motility of tumour cells and certain host cells promotes metastatic spread, the motility of tumour-reactive T cells likely increases their antitumour effects. Therefore, it is important to elucidate the mechanisms underlying all types of cell motility, with the ultimate goal of identifying combination therapies that will increase the motility of beneficial cells and block the spread of harmful cells.",
"title": ""
},
{
"docid": "neg:1840185_13",
"text": "Deep Neural Networks (DNNs) are hierarchical nonlinear architectures that have been widely used in artificial intelligence applications. However, these models are vulnerable to adversarial perturbations which add changes slightly and are crafted explicitly to fool the model. Such attacks will cause the neural network to completely change its classification of data. Although various defense strategies have been proposed, existing defense methods have two limitations. First, the discovery success rate is not very high. Second, existing methods depend on the output of a particular layer in a specific learning structure. In this paper, we propose a powerful method for adversarial samples using Large Margin Cosine Estimate(LMCE). By iteratively calculating the large-margin cosine uncertainty estimates between the model predictions, the results can be regarded as a novel measurement of model uncertainty estimation and is available to detect adversarial samples by training using a simple machine learning algorithm. Comparing it with the way in which adversar- ial samples are generated, it is confirmed that this measurement can better distinguish hostile disturbances. We modeled deep neural network attacks and established defense mechanisms against various types of adversarial attacks. Classifier gets better performance than the baseline model. The approach is validated on a series of standard datasets including MNIST and CIFAR −10, outperforming previous ensemble method with strong statistical significance. Experiments indicate that our approach generalizes better across different architectures and attacks.",
"title": ""
},
{
"docid": "neg:1840185_14",
"text": "Traffic light detection from a moving vehicle is an important technology both for new safety driver assistance functions as well as for autonomous driving in the city. In this paper we present a machine learning framework for detection of traffic lights that can handle in realtime both day and night situations in a unified manner. A semantic segmentation method is employed to generate traffic light candidates, which are then confirmed and classified by a geometric and color features based classifier. Temporal consistency is enforced by using a tracking by detection method. We evaluate our method on a publicly available dataset recorded at daytime in order to compare to existing methods and we show similar performance. We also present an evaluation on two additional datasets containing more than 50 intersections with multiple traffic lights recorded both at day and during nighttime and we show that our method performs consistently in those situations.",
"title": ""
},
{
"docid": "neg:1840185_15",
"text": "In this paper, we address the fusion problem of two estimates, where the cross-correlation between the estimates is unknown. To solve the problem within the Bayesian framework, we assume that the covariance matrix has a prior distribution. We also assume that we know the covariance of each estimate, i.e., the diagonal block of the entire co-variance matrix (of the random vector consisting of the two estimates). We then derive the conditional distribution of the off-diagonal blocks, which is the cross-correlation of our interest. The conditional distribution happens to be the inverted matrix variate t-distribution. We can readily sample from this distribution and use a Monte Carlo method to compute the minimum mean square error estimate for the fusion problem. Simulations show that the proposed method works better than the popular covariance intersection method.",
"title": ""
},
{
"docid": "neg:1840185_16",
"text": "Skin cancer, the most common human malignancy, is primarily diagnosed visually by physicians . Classification with an automated method like CNN [2, 3] shows potential for diagnosing the skin cancer according to the medical photographs. By now, the deep convolutional neural networks can achieve the level of human dermatologist . This work is dedicated on developing a Deep Learning method for ISIC [5] 2017 Skin Lesion Detection Competition to classify the dermatology pictures, which is aiming at improving the diagnostic accuracy rate. As an result, it will improve the general level of the human health. The challenge falls into three sub-challenges, including Lesion Segmentation, Lesion Dermoscopic Feature Extraction and Lesion Classification. We focus on the Lesion Classification task. The proposed algorithm is comprised of three steps: (1) original images preprocessing, (2) modelling the processed images using CNN [2, 3] in Caffe [4] framework, (3) predicting the test images and calculating the scores that represent the likelihood of corresponding classification. The models are built on the source images are using the Caffe [4] framework. The scores in prediction step are obtained by two different models from the source images.",
"title": ""
},
{
"docid": "neg:1840185_17",
"text": "In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.",
"title": ""
},
{
"docid": "neg:1840185_18",
"text": "In the last few years progress has been made in understanding basic mechanisms involved in damage to the inner ear and various potential therapeutic approaches have been developed. It was shown that hair cell loss mediated by noise or toxic drugs may be prevented by antioxidants, inhibitors of intracellular stress pathways and neurotrophic factors/neurotransmission blockers. Moreover, there is hope that once hair cells are lost, their regeneration can be induced or that stem cells can be used to build up new hair cells. However, although tremendous progress has been made, most of the concepts discussed in this review are still in the \"animal stage\" and it is difficult to predict which approach will finally enter clinical practice. In my opinion it is highly probable that some concepts of hair cell protection will enter clinical practice first, while others, such as the use of stem cells to restore hearing, are still far from clinical utility.",
"title": ""
}
] |
1840186 | Secure control against replay attacks | [
{
"docid": "pos:1840186_0",
"text": "This paper considers control and estimation problems where the sensor signals and the actuator signals are transmitted to various subsystems over a network. In contrast to traditional control and estimation problems, here the observation and control packets may be lost or delayed. The unreliability of the underlying communication network is modeled stochastically by assigning probabilities to the successful transmission of packets. This requires a novel theory which generalizes classical control/estimation paradigms. The paper offers the foundations of such a novel theory. The central contribution is to characterize the impact of the network reliability on the performance of the feedback loop. Specifically, it is shown that for network protocols where successful transmissions of packets is acknowledged at the receiver (e.g., TCP-like protocols), there exists a critical threshold of network reliability (i.e., critical probabilities for the successful delivery of packets), below which the optimal controller fails to stabilize the system. Further, for these protocols, the separation principle holds and the optimal LQG controller is a linear function of the estimated state. In stark contrast, it is shown that when there is no acknowledgement of successful delivery of control packets (e.g., UDP-like protocols), the LQG optimal controller is in general nonlinear. Consequently, the separation principle does not hold in this circumstance",
"title": ""
},
{
"docid": "pos:1840186_1",
"text": "Process control and SCADA systems, with their reliance on proprietary networks and hardware, have long been considered immune to the network attacks that have wreaked so much havoc on corporate information systems. Unfortunately, new research indicates this complacency is misplaced – the move to open standards such as Ethernet, TCP/IP and web technologies is letting hackers take advantage of the control industry’s ignorance. This paper summarizes the incident information collected in the BCIT Industrial Security Incident Database (ISID), describes a number of events that directly impacted process control systems and identifies the lessons that can be learned from these security events.",
"title": ""
},
{
"docid": "pos:1840186_2",
"text": "In this paper we attempt to answer two questions: (1) Why should we be interested in the security of control systems? And (2) What are the new and fundamentally different requirements and problems for the security of control systems? We also propose a new mathematical framework to analyze attacks against control systems. Within this framework we formulate specific research problems to (1) detect attacks, and (2) survive attacks.",
"title": ""
}
] | [
{
"docid": "neg:1840186_0",
"text": "A statistical pattern-recognition technique was applied to the classification of musical instrument tones within a taxonomic hierarchy. Perceptually salient acoustic features— related to the physical properties of source excitation and resonance structure—were measured from the output of an auditory model (the log-lag correlogram) for 1023 isolated tones over the full pitch ranges of 15 orchestral instruments. The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Using 70%/30% splits between training and test data, maximum a posteriori classifiers were constructed based on Gaussian models arrived at through Fisher multiplediscriminant analysis. The classifiers distinguished transient from continuant tones with approximately 99% correct performance. Instrument families were identified with approximately 90% performance, and individual instruments were identified with an overall success rate of approximately 70%. These preliminary analyses compare favorably with human performance on the same task and demonstrate the utility of the hierarchical approach to classification.",
"title": ""
},
{
"docid": "neg:1840186_1",
"text": "Vaginal fibroepithelial polyp is a rare lesion, and although benign, it can be confused with malignant connective tissue lesions. Treatment is simple excision, and recurrence is extremely uncommon. We report a case of a newborn with vaginal fibroepithelial polyp. The authors suggest that vaginal polyp must be considered in the evaluation of interlabial masses in prepubertal girls.",
"title": ""
},
{
"docid": "neg:1840186_2",
"text": "Mesh slicing is one of the most common operations in additive manufacturing (AM). However, the computing burden for such an application is usually very heavy, especially when dealing with large models. Nowadays the graphics processing units (GPU) have abundant resources and it is reasonable to utilize the computing power of GPU for mesh slicing. In the paper, we propose a parallel implementation of the slicing algorithm using GPU. We test the GPU-accelerated slicer on several models and obtain a speedup factor of about 30 when dealing with large models, compared with the CPU implementation. Results show the power of GPU on the mesh slicing problem. In the future, we will extend our work and standardize the slicing process.",
"title": ""
},
{
"docid": "neg:1840186_3",
"text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control. The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining of the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.",
"title": ""
},
{
"docid": "neg:1840186_4",
"text": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout.",
"title": ""
},
{
"docid": "neg:1840186_5",
"text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].",
"title": ""
},
{
"docid": "neg:1840186_6",
"text": "In many real world applications of machine learning, the distribution of the training data (on which the machine learning model is trained) is different from the distribution of the test data (where the learnt model is actually deployed). This is known as the problem of Domain Adaptation. We propose a novel deep learning model for domain adaptation which attempts to learn a predictively useful representation of the data by taking into account information from the distribution shift between the training and test data. Our key proposal is to successively learn multiple intermediate representations along an “interpolating path” between the train and test domains. Our experiments on a standard object recognition dataset show a significant performance improvement over the state-of-the-art. 1. Problem Motivation and Context Oftentimes in machine learning applications, we have to learn a model to accomplish a specific task using training data drawn from one distribution (the source domain), and deploy the learnt model on test data drawn from a different distribution (the target domain). For instance, consider the task of creating a mobile phone application for “image search for products”; where the goal is to look up product specifications and comparative shopping options from the internet, given a picture of the product taken with a user’s mobile phone. In this case, the underlying object recognizer will typically be trained on a labeled corpus of images (perhaps scraped from the internet), and tested on the images taken using the user’s phone camera. The challenge here is that the distribution of training and test images is not the same. A naively Appeared in the proceedings of the ICML 2013, Workshop on Representation Learning, Atlanta, Georgia, USA, 2013. trained object recognizer, that is just trained on the training images and applied directly to the test images, cannot be expected to have good performance. 
Such issues of mismatched train and test sets occur not only in the field of Computer Vision (Duan et al., 2009; Jain & Learned-Miller, 2011; Wang & Wang, 2011), but also in Natural Language Processing (Blitzer et al., 2006; 2007; Glorot et al., 2011), and Automatic Speech Recognition (Leggetter & Woodland, 1995). The problem of differing train and test data distributions is referred to as Domain Adaptation (Daume & Marcu, 2006; Daume, 2007). Two variations of this problem are commonly discussed in the literature. In the first variation, known as Unsupervised Domain Adaptation, no target domain labels are provided during training. One only has access to source domain labels. In the second version of the problem, called Semi-Supervised Domain Adaptation, besides access to source domain labels, we additionally assume access to a few target domain labels during training. (Often, the number of such labelled target samples is not sufficient to train a robust model using target data alone.) Previous approaches to domain adaptation can broadly be classified into a few main groups. One line of research starts out assuming the input representations are fixed (the features given are not learnable) and seeks to address domain shift by modeling the source/target distributional difference via transformations of the given representation. These transformations lead to a different distance metric which can be used in the domain adaptation classification/regression task. This is the approach taken, for instance, in (Saenko et al., 2010) and the recent linear manifold papers of (Gopalan et al., 2011; Gong et al., 2012). Another set of approaches in this fixed representation view of the problem treats domain adaptation as a conventional semi-supervised learning problem (Bergamo & Torresani, 2010; Dai et al., 2007; Yang et al., 2007; Duan et al., 2012). These works essentially construct a classifier using the labeled source data, and impose structural constraints on the classifier using unlabeled target data. 
A second line of research focusses on directly learning the representation of the inputs that is somewhat invariant across domains. Various models have been proposed (Daume, 2007; Daume et al., 2010; Blitzer et al., 2006; 2007; Pan et al., 2009), including deep learning models (Glorot et al., 2011). There are issues with both kinds of the previous proposals. In the fixed representation camp, the type of projection or structural constraint imposed often severely limits the capacity/strength of representations (linear projections, for example, are common). In the representation learning camp, existing deep models do not attempt to explicitly encode the distributional shift between the source and target domains. In this paper we propose a novel deep learning model for the problem of domain adaptation which combines ideas from both of the previous approaches. We call our model (DLID): Deep Learning for domain adaptation by Interpolating between Domains. By operating in the deep learning paradigm, we also learn hierarchical non-linear representations of the source and target inputs. However, we explicitly define and use an “interpolating path” between the source and target domains while learning the representation. This interpolating path captures information about structures intermediate to the source and target domains. The resulting representation we obtain is highly rich (containing source to target path information) and allows us to handle the domain adaptation task extremely well. There are multiple benefits to our approach compared to those proposed in the literature. First, we are able to train intricate non-linear representations of the input, while explicitly modeling the transformation between the source and target domains. 
Second, instead of learning a representation which is independent of the final task, our model can learn representations with information from the final classification/regression task. This is achieved by fine-tuning the pre-trained intermediate feature extractors using feedback from the final task. Finally, our approach can gracefully handle additional training data being made available in the future. We would simply fine-tune our model with the new data, as opposed to having to retrain the entire model again from scratch. We evaluate our model on the domain adaptation problem of object recognition on a standard dataset (Saenko et al., 2010). Empirical results show that our model outperforms the state of the art by a significant margin. In some cases there is an improvement of over 40% from the best previously reported results. An analysis of the learnt representations sheds some light onto the properties that result in such excellent performance (Ben-David et al., 2007). 
2. An Overview of DLID 
At a high level, the DLID model is a deep neural network model designed specifically for the problem of domain adaptation. Deep networks have had tremendous success recently, achieving state-of-the-art performance on a number of machine learning tasks (Bengio, 2009). In large part, their success can be attributed to their ability to learn extremely powerful hierarchical non-linear representations of the inputs. In particular, breakthroughs in unsupervised pre-training (Bengio et al., 2006; Hinton et al., 2006; Hinton & Salakhutdinov, 2006; Ranzato et al., 2006) have been critical in enabling deep networks to be trained robustly. As with other deep neural network models, DLID also learns its representation using unsupervised pre-training. The key difference is that in the DLID model, we explicitly capture information from an “interpolating path” between the source domain and the target domain. 
As mentioned in the introduction, our interpolating path is motivated by the ideas discussed in Gopalan et al. (2011); Gong et al. (2012). In these works, the original high dimensional features are linearly projected (typically via PCA/PLS) to a lower dimensional space. Because these are linear projections, the source and target lower dimensional subspaces lie on the Grassman manifold. Geometric properties of the manifold, like shortest paths (geodesics), present an interesting and principled way to transition/interpolate smoothly between the source and target subspaces. It is this path information on the manifold that is used by Gopalan et al. (2011); Gong et al. (2012) to construct more robust and accurate classifiers for the domain adaptation task. In DLID, we define a somewhat different notion of an interpolating path between source and target domains, but appeal to a similar intuition. Figure 1 shows an illustration of our model. Let the set of data samples for the source domain S be denoted by DS, and that of the target domain T be denoted by DT. Starting with all the source data samples DS, we generate intermediate sampled datasets, where for each successive dataset we gradually increase the proportion of samples randomly drawn from DT, and decrease the proportion of samples drawn from DS. In particular, let p ∈ [1, . . . , P] be an index over the P datasets we generate. Then we have Dp = DS for p = 1, Dp = DT for p = P. For p ∈ [2, . . . , P − 1], datasets Dp and Dp+1 are created in a way so that the proportion of samples from DT in Dp is less than in Dp+1. Each of these data sets can be thought of as a single point on a particular kind of interpolating path between S and T.",
"title": ""
},
{
"docid": "neg:1840186_7",
"text": "A new population-based search algorithm called the Bees Algorithm (BA) is presented in this paper. The algorithm mimics the food foraging behavior of swarms of honey bees. This algorithm performs a kind of neighborhood search combined with random search and can be used for both combinatorial optimization and functional optimization and with good numerical optimization results. ABC is a meta-heuristic optimization technique inspired by the intelligent foraging behavior of honeybee swarms. This paper demonstrates the efficiency and robustness of the ABC algorithm to solve MDVRP (Multiple depot vehicle routing problems). KeywordsSwarm intelligence, ant colony optimization, Genetic Algorithm, Particle Swarm optimization, Artificial Bee Colony optimization.",
"title": ""
},
{
"docid": "neg:1840186_8",
"text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.",
"title": ""
},
{
"docid": "neg:1840186_9",
"text": "The economic viability of colonizing Mars is examined. It is shown, that of all bodies in the solar system other than Earth, Mars is unique in that it has the resources required to support a population of sufficient size to create locally a new branch of human civilization. It is also shown that while Mars may lack any cash material directly exportable to Earth, Mars’ orbital elements and other physical parameters gives a unique positional advantage that will allow it to act as a keystone supporting extractive activities in the asteroid belt and elsewhere in the solar system. The potential of relatively near-term types of interplanetary transportation systems is examined, and it is shown that with very modest advances on a historical scale, systems can be put in place that will allow individuals and families to emigrate to Mars at their own discretion. Their motives for doing so will parallel in many ways the historical motives for Europeans and others to come to America, including higher pay rates in a labor-short economy, escape from tradition and oppression, as well as freedom to exercise their drive to create in an untamed and undefined world. Under conditions of such large scale immigration, sale of real-estate will add a significant source of income to the planet’s economy. Potential increases in real-estate values after terraforming will provide a sufficient financial incentive to do so. In analogy to frontier America, social conditions on Mars will make it a pressure cooker for invention. These inventions, licensed on Earth, will raise both Terrestrial and Martian living standards and contribute large amounts of income to support the development of the colony.",
"title": ""
},
{
"docid": "neg:1840186_10",
"text": "Organizations are attempting to leverage their knowledge resources by employing knowledge management (KM) systems, a key form of which are electronic knowledge repositories (EKRs). A large number of KM initiatives fail due to reluctance of employees to share knowledge through these systems. Motivated by such concerns, this study formulates and tests a theoretical model to explain EKR usage by knowledge contributors. The model employs social exchange theory to identify cost and benefit factors affecting EKR usage, and social capital theory to account for the moderating influence of contextual factors. The model is validated through a large-scale survey of public sector organizations. The results reveal that knowledge self-efficacy and enjoyment in helping others significantly impact EKR usage by knowledge contributors. Contextual factors (generalized trust, pro-sharing norms, and identification) moderate the impact of codification effort, reciprocity, and organizational reward on EKR usage, respectively. It can be seen that extrinsic benefits (reciprocity and organizational reward) impact EKR usage contingent on particular contextual factors whereas the effects of intrinsic benefits (knowledge self-efficacy and enjoyment in helping others) on EKR usage are not moderated by contextual factors. The loss of knowledge power and image do not appear to impact EKR usage by knowledge contributors. Besides contributing to theory building in KM, the results of this study inform KM practice.",
"title": ""
},
{
"docid": "neg:1840186_11",
"text": "BACKGROUND\nThe choice of antimicrobials for initial treatment of peritoneal dialysis (PD)-related peritonitis is crucial for a favorable outcome. There is no consensus about the best therapy; few prospective controlled studies have been published, and the only published systematic reviews did not report superiority of any class of antimicrobials. The objective of this review was to analyze the results of PD peritonitis treatment in adult patients by employing a new methodology, the proportional meta-analysis.\n\n\nMETHODS\nA review of the literature was conducted. There was no language restriction. Studies were obtained from MEDLINE, EMBASE, and LILACS. The inclusion criteria were: (a) case series and RCTs with the number of reported patients in each study greater than five, (b) use of any antibiotic therapy for initial treatment (e.g., cefazolin plus gentamicin or vancomycin plus gentamicin), for Gram-positive (e.g., vancomycin or a first generation cephalosporin), or for Gram-negative rods (e.g., gentamicin, ceftazidime, and fluoroquinolone), (c) patients with PD-related peritonitis, and (d) studies specifying the rates of resolution. A proportional meta-analysis was performed on outcomes using a random-effects model, and the pooled resolution rates were calculated.\n\n\nRESULTS\nA total of 64 studies (32 for initial treatment and negative culture, 28 reporting treatment for Gram-positive rods and 24 reporting treatment for Gram-negative rods) and 21 RCTs met all inclusion criteria (14 for initial treatment and negative culture, 8 reporting treatment for Gram-positive rods and 8 reporting treatment for Gram-negative rods). 
The pooled resolution rate of ceftazidime plus glycopeptide as initial treatment (pooled proportion = 86% [95% CI 0.82-0.89]) was significantly higher than first generation cephalosporin plus aminoglycosides (pooled proportion = 66% [95% CI 0.57-0.75]) and significantly higher than glycopeptides plus aminoglycosides (pooled proportion = 75% [95% CI 0.69-0.80]. Other comparisons of regimens used for either initial treatment, treatment for Gram-positive rods or Gram-negative rods did not show statistically significant differences.\n\n\nCONCLUSION\nWe showed that the association of a glycopeptide plus ceftazidime is superior to other regimens for initial treatment of PD peritonitis. This result should be carefully analyzed and does not exclude the necessity of monitoring the local microbiologic profile in each dialysis center to choice the initial therapeutic protocol.",
"title": ""
},
{
"docid": "neg:1840186_12",
"text": "Mobile applications usually need to be provided for more than one operating system. Developing native apps separately for each platform is a laborious and expensive undertaking. Hence, cross-platform approaches have emerged, most of them based on Web technologies. While these enable developers to use a single code base for all platforms, resulting apps lack a native look & feel. This, however, is often desired by users and businesses. Furthermore, they have a low abstraction level. We propose MD2, an approach for model-driven cross-platform development of apps. With MD2, developers specify an app in a high-level (domain-specific) language designed for describing business apps succinctly. From this model, purely native apps for Android and iOS are automatically generated. MD2 was developed in close cooperation with industry partners and provides means to develop data-driven apps with a native look and feel. Apps can access the device hardware and interact with remote servers.",
"title": ""
},
{
"docid": "neg:1840186_13",
"text": "Despite the flourishing research on the relationships between affect and language, the characteristics of pain-related words, a specific type of negative words, have never been systematically investigated from a psycholinguistic and emotional perspective, despite their psychological relevance. This study offers psycholinguistic, affective, and pain-related norms for words expressing physical and social pain. This may provide a useful tool for the selection of stimulus materials in future studies on negative emotions and/or pain. We explored the relationships between psycholinguistic, affective, and pain-related properties of 512 Italian words (nouns, adjectives, and verbs) conveying physical and social pain by asking 1020 Italian participants to provide ratings of Familiarity, Age of Acquisition, Imageability, Concreteness, Context Availability, Valence, Arousal, Pain-Relatedness, Intensity, and Unpleasantness. We also collected data concerning Length, Written Frequency (Subtlex-IT), N-Size, Orthographic Levenshtein Distance 20, Neighbor Mean Frequency, and Neighbor Maximum Frequency of each word. Interestingly, the words expressing social pain were rated as more negative, arousing, pain-related, and conveying more intense and unpleasant experiences than the words conveying physical pain.",
"title": ""
},
{
"docid": "neg:1840186_14",
"text": "With the tremendous amount of textual data available in the Internet, techniques for abstractive text summarization become increasingly appreciated. In this paper, we present work in progress that tackles the problem of multilingual text summarization using semantic representations. Our system is based on abstract linguistic structures obtained from an analysis pipeline of disambiguation, syntactic and semantic parsing tools. The resulting structures are stored in a semantic repository, from which a text planning component produces content plans that go through a multilingual generation pipeline that produces texts in English, Spanish, French, or German. In this paper we focus on the lingusitic components of the summarizer, both analysis and generation.",
"title": ""
},
{
"docid": "neg:1840186_15",
"text": "Efficient storage and querying of large repositories of RDF content is important due to the widespread growth of Semantic Web and Linked Open Data initiatives. Many novel database systems that store RDF in its native form or within traditional relational storage have demonstrated their ability to scale to large volumes of RDF content. However, it is increasingly becoming obvious that the simple dyadic relationship captured through traditional triples alone is not sufficient for modelling multi-entity relationships, provenance of facts, etc. Such richer models are supported in RDF through two techniques - first, called reification which retains the triple nature of RDF and the second, a non-standard extension called N-Quads. In this paper, we explore the challenges of supporting such richer semantic data by extending the state-of-the-art RDF-3X system. We describe our implementation of RQ-RDF-3X, a reification and quad enhanced RDF-3X, which involved a significant re-engineering ranging from the set of indexes and their compression schemes to the query processing pipeline for queries over reified content. Using large RDF repositories such as YAGO2S and DBpedia, and a set of SPARQL queries that utilize reification model, we demonstrate that RQ-RDF-3X is significantly faster than RDF-3X.",
"title": ""
},
{
"docid": "neg:1840186_16",
"text": "Published in Agron. J. 104:1336–1347 (2012) Posted online 29 June 2012 doi:10.2134/agronj2012.0065 Copyright © 2012 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved. No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. T leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. 
For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by nearinfrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002), its ABStrAct",
"title": ""
},
{
"docid": "neg:1840186_17",
"text": "Combined chromatin immunoprecipitation and next-generation sequencing (ChIP-seq) has enabled genome-wide epigenetic profiling of numerous cell lines and tissue types. A major limitation of ChIP-seq, however, is the large number of cells required to generate high-quality data sets, precluding the study of rare cell populations. Here, we present an ultra-low-input micrococcal nuclease-based native ChIP (ULI-NChIP) and sequencing method to generate genome-wide histone mark profiles with high resolution from as few as 10(3) cells. We demonstrate that ULI-NChIP-seq generates high-quality maps of covalent histone marks from 10(3) to 10(6) embryonic stem cells. Subsequently, we show that ULI-NChIP-seq H3K27me3 profiles generated from E13.5 primordial germ cells isolated from single male and female embryos show high similarity to recent data sets generated using 50-180 × more material. Finally, we identify sexually dimorphic H3K27me3 enrichment at specific genic promoters, thereby illustrating the utility of this method for generating high-quality and -complexity libraries from rare cell populations.",
"title": ""
},
{
"docid": "neg:1840186_18",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] |
1840187 | A review on deep learning for recommender systems: challenges and remedies | [
{
"docid": "pos:1840187_0",
"text": "Recommender systems based on deep learning technology pay huge attention recently. In this paper, we propose a collaborative filtering based recommendation algorithm that utilizes the difference of similarities among users derived from different layers in stacked denoising autoencoders. Since different layers in a stacked autoencoder represent the relationships among items with rating at different levels of abstraction, we can expect to make recommendations more novel, various and serendipitous, compared with a normal collaborative filtering using single similarity. The results of experiments using MovieLens dataset show that the proposed recommendation algorithm can improve the diversity of recommendation lists without great loss of accuracy.",
"title": ""
},
{
"docid": "pos:1840187_1",
"text": "Collaborative Filtering with Implicit Feedbacks (e.g., browsing or clicking records), named as CF-IF, is demonstrated to be an effective way in recommender systems. Existing works of CF-IF can be mainly classified into two categories, i.e., point-wise regression based and pairwise ranking based, where the latter one relaxes assumption and usually obtains better performance in empirical studies. In real applications, implicit feedback is often very sparse, causing CF-IF based methods to degrade significantly in recommendation performance. In this case, side information (e.g., item content) is usually introduced and utilized to address the data sparsity problem. Nevertheless, the latent feature representation learned from side information by topic model may not be very effective when the data is too sparse. To address this problem, we propose collaborative deep ranking (CDR), a hybrid pair-wise approach with implicit feedback, which leverages deep feature representation of item content into Bayesian framework of pair-wise ranking model in this paper. The experimental analysis on a real-world dataset shows CDR outperforms three state-of-art methods in terms of recall metric under different sparsity level.",
"title": ""
}
] | [
{
"docid": "neg:1840187_0",
"text": "It is logical that the requirement for antioxidant nutrients depends on a person's exposure to endogenous and exogenous reactive oxygen species. Since cigarette smoking results in an increased cumulative exposure to reactive oxygen species from both sources, it would seem cigarette smokers would have an increased requirement for antioxidant nutrients. Logic dictates that a diet high in antioxidant-rich foods such as fruits, vegetables, and spices would be both protective and a prudent preventive strategy for smokers. This review examines available evidence of fruit and vegetable intake, and supplementation of antioxidant compounds by smokers in an attempt to make more appropriate nutritional recommendations to this population.",
"title": ""
},
{
"docid": "neg:1840187_1",
"text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.",
"title": ""
},
{
"docid": "neg:1840187_2",
"text": "To identify the effects of core stabilization exercise on the Cobb angle and lumbar muscle strength of adolescent patients with idiopathic scoliosis. Subjects in the present study consisted of primary school students who were confirmed to have scoliosis on radiologic examination performed during their visit to the National Fitness Center in Seoul, Korea. Depending on whether they participated in a 12-week core stabilization exercise program, subjects were divided into the exercise (n=14, age 12.71±0.72 years) or control (n=15, age 12.80±0.86 years) group. The exercise group participated in three sessions of core stabilization exercise per week for 12 weeks. The Cobb angle, flexibility, and lumbar muscle strength tests were performed before and after core stabilization exercise. Repeated-measure two-way analysis of variance was performed to compare the treatment effects between the exercise and control groups. There was no significant difference in thoracic Cobb angle between the groups. The exercise group had a significant decrease in the lumbar Cobb angle after exercise compared to before exercise (P<0.001). The exercise group also had a significant increase in lumbar flexor and extensor muscles strength after exercise compared to before exercise (P<0.01 and P<0.001, respectively). Core stabilization exercise can be an effective therapeutic exercise to decrease the Cobb angle and improve lumbar muscle strength in adolescents with idiopathic scoliosis.",
"title": ""
},
{
"docid": "neg:1840187_3",
"text": "Registering a 3D facial model to a 2D image under occlusion is difficult. First, not all of the detected facial landmarks are accurate under occlusions. Second, the number of reliable landmarks may not be enough to constrain the problem. We propose a method to synthesize additional points (Sensible Points) to create pose hypotheses. The visual clues extracted from the fiducial points, non-fiducial points, and facial contour are jointly employed to verify the hypotheses. We define a reward function to measure whether the projected dense 3D model is well-aligned with the confidence maps generated by two fully convolutional networks, and use the function to train recurrent policy networks to move the Sensible Points. The same reward function is employed in testing to select the best hypothesis from a candidate pool of hypotheses. Experimentation demonstrates that the proposed approach is very promising in solving the facial model registration problem under occlusion.",
"title": ""
},
{
"docid": "neg:1840187_4",
"text": "Patterns of neural activity are systematically elicited as the brain experiences categorical stimuli and a major challenge is to understand what these patterns represent. Two influential approaches, hitherto treated as separate analyses, have targeted this problem by using model-representations of stimuli to interpret the corresponding neural activity patterns. Stimulus-model-based-encoding synthesizes neural activity patterns by first training weights to map between stimulus-model features and voxels. This allows novel model-stimuli to be mapped into voxel space, and hence the strength of the model to be assessed by comparing predicted against observed neural activity. Representational Similarity Analysis (RSA) assesses models by testing how well the grand structure of pattern-similarities measured between all pairs of model-stimuli aligns with the same structure computed from neural activity patterns. RSA does not require model fitting, but also does not allow synthesis of neural activity patterns, thereby limiting its applicability. We introduce a new approach, representational similarity-encoding, that builds on the strengths of RSA and robustly enables stimulus-model-based neural encoding without model fitting. The approach therefore sidesteps problems associated with overfitting that notoriously confront any approach requiring parameter estimation (and is consequently low cost computationally), and importantly enables encoding analyses to be incorporated within the wider Representational Similarity Analysis framework. We illustrate this new approach by using it to synthesize and decode fMRI patterns representing the meanings of words, and discuss its potential biological relevance to encoding in semantic memory. Our new similarity-based encoding approach unites the two previously disparate methods of encoding models and RSA, capturing the strengths of both, and enabling similarity-based synthesis of predicted fMRI patterns.",
"title": ""
},
{
"docid": "neg:1840187_5",
"text": "Event sequence, asynchronously generated with random timestamp, is ubiquitous among applications. The precise and arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally different from the time-series whereby series is indexed with fixed and equal time interval. One expressive mathematical tool for modeling event is point process. The intensity functions of many point processes involve two components: the background and the effect by the history. Due to its inherent spontaneousness, the background can be treated as a time series while the other need to handle the history events. In this paper, we model the background by a Recurrent Neural Network (RNN) with its units aligned with time series indexes while the history effect is modeled by another RNN whose units are aligned with asynchronous events to capture the long-range dynamics. The whole model with event type and timestamp prediction output layers can be trained end-to-end. Our approach takes an RNN perspective to point process, and models its background and history effect. For utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form in point processes. Meanwhile end-to-end training opens the venue for reusing existing rich techniques in deep network for point process modeling. We apply our model to the predictive maintenance problem using a log dataset by more than 1000 ATMs from a global bank headquartered in North America.",
"title": ""
},
{
"docid": "neg:1840187_6",
"text": "Cocitation and co-word methods have long been used to detect and track emerging topics in scientific literature, but both have weaknesses. Recently, while many researchers have adopted generative probabilistic models for topic detection and tracking, few have compared generative probabilistic models with traditional cocitation and co-word methods in terms of their overall performance. In this article, we compare the performance of hierarchical Dirichlet process (HDP), a promising generative probabilistic model, with that of the 2 traditional topic detecting and tracking methods— cocitation analysis and co-word analysis. We visualize and explore the relationships between topics identified by the 3 methods in hierarchical edge bundling graphs and time flow graphs. Our result shows that HDP is more sensitive and reliable than the other 2 methods in both detecting and tracking emerging topics. Furthermore, we demonstrate the important topics and topic evolution trends in the literature of terrorism research with the HDP method.",
"title": ""
},
{
"docid": "neg:1840187_7",
"text": "This paper presents a Model Reference Adaptive System (MRAS) based speed sensorless estimation of vector controlled Induction Motor Drive. MRAS based techniques are one of the best methods to estimate the rotor speed due to its performance and straightforward stability approach. Depending on the type of tuning signal driving the adaptation mechanism, MRAS estimators are classified into rotor flux based MRAS, back e.m.f based MRAS, reactive power based MRAS and artificial neural network based MRAS. In this paper, the performance of the rotor flux based MRAS for estimating the rotor speed was studied. Overview on the IM mathematical model is briefly summarized to establish a physical basis for the sensorless scheme used. Further, the theoretical basis of indirect field oriented vector control is explained in detail and it is implemented in MATLAB/SIMULINK.",
"title": ""
},
{
"docid": "neg:1840187_8",
"text": "Healthcare applications are considered as promising fields for wireless sensor networks, where patients can be monitored using wireless medical sensor networks (WMSNs). Current WMSN healthcare research trends focus on patient reliable communication, patient mobility, and energy-efficient routing, as a few examples. However, deploying new technologies in healthcare applications without considering security makes patient privacy vulnerable. Moreover, the physiological data of an individual are highly sensitive. Therefore, security is a paramount requirement of healthcare applications, especially in the case of patient privacy, if the patient has an embarrassing disease. This paper discusses the security and privacy issues in healthcare application using WMSNs. We highlight some popular healthcare projects using wireless medical sensor networks, and discuss their security. Our aim is to instigate discussion on these critical issues since the success of healthcare application depends directly on patient security and privacy, for ethic as well as legal reasons. In addition, we discuss the issues with existing security mechanisms, and sketch out the important security requirements for such applications. In addition, the paper reviews existing schemes that have been recently proposed to provide security solutions in wireless healthcare scenarios. Finally, the paper ends up with a summary of open security research issues that need to be explored for future healthcare applications using WMSNs.",
"title": ""
},
{
"docid": "neg:1840187_9",
"text": "Estimating surface normals is an important task in computer vision, e.g. in surface reconstruction, registration and object detection. In stereo vision, the error of depth reconstruction increases quadratically with distance. This makes estimation of surface normals an especially demanding task. In this paper, we analyze how error propagates from noisy disparity data to the orientation of the estimated surface normal. Firstly, we derive a transformation for normals between disparity space and world coordinates. Afterwards, the propagation of disparity noise is analyzed by means of a Monte Carlo method. Normal reconstruction at a pixel position requires to consider a certain neighborhood of the pixel. The extent of this neighborhood affects the reconstruction error. Our method allows to determine the optimal neighborhood size required to achieve a pre specified deviation of the angular reconstruction error, defined by a confidence interval. We show that the reconstruction error only depends on the distance of the surface point to the camera, the pixel distance to the principal point in the image plane and the angle at which the viewing ray intersects the surface.",
"title": ""
},
{
"docid": "neg:1840187_10",
"text": "We present a novel approach to still image denoising based on e ective filtering in 3D transform domain by combining sliding-window transform processing with block-matching. We process blocks within the image in a sliding manner and utilize the block-matching concept by searching for blocks which are similar to the currently processed one. The matched blocks are stacked together to form a 3D array and due to the similarity between them, the data in the array exhibit high level of correlation. We exploit this correlation by applying a 3D decorrelating unitary transform and e ectively attenuate the noise by shrinkage of the transform coe cients. The subsequent inverse 3D transform yields estimates of all matched blocks. After repeating this procedure for all image blocks in sliding manner, the final estimate is computed as weighed average of all overlapping blockestimates. A fast and e cient algorithm implementing the proposed approach is developed. The experimental results show that the proposed method delivers state-of-art denoising performance, both in terms of objective criteria and visual quality.",
"title": ""
},
{
"docid": "neg:1840187_11",
"text": "Extracorporeal photopheresis (ECP) is a technique that was developed > 20 years ago to treat erythrodermic cutaneous T-cell lymphoma (CTCL). The technique involves removal of peripheral blood, separation of the buffy coat, and photoactivation with a photosensitizer and ultraviolet A irradiation before re-infusion of cells. More than 1000 patients with CTCL have been treated with ECP, with response rates of 31-100%. ECP has been used in a number of other conditions, most widely in the treatment of chronic graft-versus-host disease (cGvHD) with response rates of 29-100%. ECP has also been used in several other autoimmune diseases including acute GVHD, solid organ transplant rejection and Crohn's disease, with some success. ECP is a relatively safe procedure, and side-effects are typically mild and transient. Severe reactions including vasovagal syncope or infections are uncommon. This is very valuable in conditions for which alternative treatments are highly toxic. The mechanism of action of ECP remains elusive. ECP produces a number of immunological changes and in some patients produces immune homeostasis with resultant clinical improvement. ECP is available in seven centres in the UK. Experts from all these centres formed an Expert Photopheresis Group and published the UK consensus statement for ECP in 2008. All centres consider patients with erythrodermic CTCL and steroid-refractory cGvHD for treatment. The National Institute for Health and Clinical Excellence endorsed the use of ECP for CTCL and suggested a need for expansion while recommending its use in specialist centres. ECP is safe, effective, and improves quality of life in erythrodermic CTCL and cGvHD, and should be more widely available for these patients.",
"title": ""
},
{
"docid": "neg:1840187_12",
"text": "In this paper, we present a study on the characterization and the classification of textures. This study is performed using a set of values obtained by the computation of indexes. To obtain these indexes, we extract a set of data with two techniques: the computation of matrices which are statistical representations of the texture and the computation of \"measures\". These matrices and measures are subsequently used as parameters of a function bringing real or discrete values which give information about texture features. A model of texture characterization is built based on this numerical information, for example to classify textures. An application is proposed to classify cells nuclei in order to diagnose patients affected by the Progeria disease.",
"title": ""
},
{
"docid": "neg:1840187_13",
"text": "Adaptive beamforming methods are known to degrade if some of underlying assumptions on the environment, sources, or sensor array become violated. In particular, if the desired signal is present in training snapshots, the adaptive array performance may be quite sensitive even to slight mismatches between the presumed and actual signal steering vectors (spatial signatures). Such mismatches can occur as a result of environmental nonstationarities, look direction errors, imperfect array calibration, distorted antenna shape, as well as distortions caused by medium inhomogeneities, near–far mismatch, source spreading, and local scattering. The similar type of performance degradation can occur when the signal steering vector is known exactly but the training sample size is small. In this paper, we develop a new approach to robust adaptive beamforming in the presence of an arbitrary unknown signal steering vector mismatch. Our approach is based on the optimization of worst-case performance. It turns out that the natural formulation of this adaptive beamforming problem involves minimization of a quadratic function subject to infinitely many nonconvex quadratic constraints. We show that this (originally intractable) problem can be reformulated in a convex form as the so-called second-order cone (SOC) program and solved efficiently (in polynomial time) using the well-established interior point method. It is also shown that the proposed technique can be interpreted in terms of diagonal loading where the optimal value of the diagonal loading factor is computed based on the known level of uncertainty of the signal steering vector. Computer simulations with several frequently encountered types of signal steering vector mismatches show better performance of our robust beamformer as compared with existing adaptive beamforming algorithms.",
"title": ""
},
{
"docid": "neg:1840187_14",
"text": "As the use of Twitter has become more commonplace throughout many nations, its role in political discussion has also increased. This has been evident in contexts ranging from general political discussion through local, state, and national elections (such as in the 2010 Australian elections) to protests and other activist mobilisation (for example in the current uprisings in Tunisia, Egypt, and Yemen, as well as in the controversy around Wikileaks). Research into the use of Twitter in such political contexts has also developed rapidly, aided by substantial advancements in quantitative and qualitative methodologies for capturing, processing, analysing, and visualising Twitter updates by large groups of users. Recent work has especially highlighted the role of the Twitter hashtag – a short keyword, prefixed with the hash symbol ‘#’ – as a means of coordinating a distributed discussion between more or less large groups of users, who do not need to be connected through existing ‘follower’ networks. Twitter hashtags – such as ‘#ausvotes’ for the 2010 Australian elections, ‘#londonriots’ for the coordination of information and political debates around the recent unrest in London, or ‘#wikileaks’ for the controversy around Wikileaks thus aid the formation of ad hoc publics around specific themes and topics. They emerge from within the Twitter community – sometimes as a result of pre-planning or quickly reached consensus, sometimes through protracted debate about what the appropriate hashtag for an event or topic should be (which may also lead to the formation of competing publics using different hashtags). Drawing on innovative methodologies for the study of Twitter content, this paper examines the use of hashtags in political debate in the context of a number of major case studies.",
"title": ""
},
{
"docid": "neg:1840187_15",
"text": "In recent years, prison officials have increasingly turned to solitary confinement as a way to manage difficult or dangerous prisoners. Many of the prisoners subjected to isolation, which can extend for years, have serious mental illness, and the conditions of solitary confinement can exacerbate their symptoms or provoke recurrence. Prison rules for isolated prisoners, however, greatly restrict the nature and quantity of mental health services that they can receive. In this article, we describe the use of isolation (called segregation by prison officials) to confine prisoners with serious mental illness, the psychological consequences of such confinement, and the response of U.S. courts and human rights experts. We then address the challenges and human rights responsibilities of physicians confronting this prison practice. We conclude by urging professional organizations to adopt formal positions against the prolonged isolation of prisoners with serious mental illness.",
"title": ""
},
{
"docid": "neg:1840187_16",
"text": "Kernel modules are an integral part of most operating systems (OS) as they provide flexible ways of adding new functionalities (such as file system or hardware support) to the kernel without the need to recompile or reload the entire kernel. Aside from providing an interface between the user and the hardware, these modules maintain system security and reliability. Malicious kernel level exploits (e.g. code injections) provide a gateway to a system's privileged level where the attacker has access to an entire system. Such attacks may be detected by performing code integrity checks. Several commodity operating systems (such as Linux variants and MS Windows) maintain signatures of different pieces of kernel code in a database for code integrity checking purposes. However, it quickly becomes cumbersome and time consuming to maintain a database of legitimate dynamic changes in the code, such as regular module updates. In this paper we present Mod Checker, which checks in-memory kernel modules' code integrity in real time without maintaining a database of hashes. Our solution applies to virtual environments that have multiple virtual machines (VMs) running the same version of the operating system, an environment commonly found in large cloud servers. Mod Checker compares kernel module among a pool of VMs within a cloud. We thoroughly evaluate the effectiveness and runtime performance of Mod Checker and conclude that Mod Checker is able to detect any change in a kernel module's headers and executable content with minimal or no impact on the guest operating systems' performance.",
"title": ""
},
{
"docid": "neg:1840187_17",
"text": "In this paper, we propose a novel text representation paradigm and a set of follow-up text representation models based on cognitive psychology theories. The intuition of our study is that the knowledge implied in a large collection of documents may improve the understanding of single documents. Based on cognitive psychology theories, we propose a general text enrichment framework, study the key factors to enable activation of implicit information, and develop new text representation methods to enrich text with the implicit information. Our study aims to mimic some aspects of human cognitive procedure in which given stimulant words serve to activate understanding implicit concepts. By incorporating human cognition into text representation, the proposed models advance existing studies by mining implicit information from given text and coordinating with most existing text representation approaches at the same time, which essentially bridges the gap between explicit and implicit information. Experiments on multiple tasks show that the implicit information activated by our proposed models matches human intuition and significantly improves the performance of the text mining tasks as well.",
"title": ""
},
{
"docid": "neg:1840187_18",
"text": "Optimal facility layout is one of the factors that can affect the efficiency of any organization; annually, millions of dollars in costs are incurred, or profits saved, because of it. Studies in the field of facility layout can be classified into two general categories: static facility layout problems and dynamic facility layout problems. Because the dynamic facility layout problem is the more realistic one, this paper investigates it and tries to consider all necessary aspects of the issue to make it more practical. In this regard, this research develops a three-objective model that tries to simultaneously minimize total operating costs and production time. Since calculating production time using analytical relations is impossible, this research measures production time using simulation and regression analysis of a statistical correlation, so the developed model is a combination of analytical and statistical relationships. The proposed model is NP-hard, so that even finding an optimal solution for its small-scale instances is very difficult and time consuming. The multi-objective meta-heuristic NSGA-II and NRGA algorithms are used to solve the problem. Since the outputs of meta-heuristic algorithms are highly dependent on their input parameters, the Taguchi experimental design method is also used to set the parameters. Also, in order to assess the efficiency of the proposed procedures, the method has been analyzed on generated pilot problems with various aspects. The results of comparing the algorithms on several criteria consistently show the superiority of NSGA-II over NRGA in solving the problem.",
"title": ""
}
] |
1840188 | Differential privacy and robust statistics | [
{
"docid": "pos:1840188_0",
"text": "We study the role that privacy-preserving algorithms, which prevent the leakage of specific information about participants, can play in the design of mechanisms for strategic agents, which must encourage players to honestly report information. Specifically, we show that the recent notion of differential privacy, in addition to its own intrinsic virtue, can ensure that participants have limited effect on the outcome of the mechanism, and as a consequence have limited incentive to lie. More precisely, mechanisms with differential privacy are approximate dominant strategy under arbitrary player utility functions, are automatically resilient to coalitions, and easily allow repeatability. We study several special cases of the unlimited supply auction problem, providing new results for digital goods auctions, attribute auctions, and auctions with arbitrary structural constraints on the prices. As an important prelude to developing a privacy-preserving auction mechanism, we introduce and study a generalization of previous privacy work that accommodates the high sensitivity of the auction setting, where a single participant may dramatically alter the optimal fixed price, and a slight change in the offered price may take the revenue from optimal to zero.",
"title": ""
}
] | [
{
"docid": "neg:1840188_0",
"text": "In adaptive learning systems for distance learning, attention is focused on adjusting the learning material to the needs of the individual. Adaptive tests adjust to the current level of knowledge of the examinee and are specific to their needs, and are thus much better at evaluating the knowledge of each individual. The basic goal of adaptive computer tests is to present the examinee with questions that are challenging enough for them but not too difficult, which would lead to frustration and confusion. The aim of this paper is to present a computer adaptive test (CAT) realized in MATLAB.",
"title": ""
},
{
"docid": "neg:1840188_1",
"text": "This paper presents algorithms and techniques for single-sensor tracking and multi-sensor fusion of infrared and radar data. The results show that fusing radar data with infrared data considerably increases detection range, reliability and accuracy of the object tracking. This is mandatory for further development of driver assistance systems. Using multiple model filtering for sensor fusion applications helps to capture the dynamics of maneuvering objects while still achieving smooth object tracking for not maneuvering objects. This is important when safety and comfort systems have to make use of the same sensor information. Comfort systems generally require smoothly filtered data whereas for safety systems it is crucial to capture maneuvers of other road users as fast as possible. Multiple model filtering and probabilistic data association techniques are presented and all presented algorithms are tested in real-time on standard PC systems.",
"title": ""
},
{
"docid": "neg:1840188_2",
"text": "As more companies embrace the concepts of sustainable development, there is a need to bring the ideas inherent in eco-efficiency and \"triple-bottom-line\" thinking down to a practical implementation level. Putting this concept into operation requires an understanding of the key indicators of sustainability and how they can be measured to determine if, in fact, progress is being made. Sustainability metrics are intended as simple yardsticks that are applicable across industry. The primary objective of this approach is to improve internal management decision-making with respect to the sustainability of processes, products and services. This approach can be used to make better decisions at any stage of the stage-gate process: from identification of an innovation to design to manufacturing and ultimately to exiting a business. More specifically, sustainability metrics can assist decision makers in setting goals, benchmarking, and comparing alternatives such as different suppliers, raw materials, and improvement options from the sustainability perspective. This paper provides a review of the early efforts and recent progress in the development of sustainability metrics. The experience of BRIDGES to Sustainability™, a not-for-profit organization, in testing, adapting, and refining the sustainability metrics is summarized. Basic and complementary metrics under six impact categories (material, energy, water, solid wastes, toxic release, and pollutant effects) are discussed. The development of BRIDGESworks™ Metrics, a metrics management software tool, is also presented. The software was designed to be both easy to use and flexible. It incorporates a base set of metrics and their heuristics for calculation, as well as a robust set of impact assessment data for use in identifying pollutant effects. While the tool provides a metrics management starting point, the user has the option of creating other, user-defined metrics.
The sustainability metrics work at BRIDGES to Sustainability™ was funded partially by the U.S. Department of Energy through a subcontract with the American Institute of Chemical Engineers and through corporate pilots.",
"title": ""
},
{
"docid": "neg:1840188_3",
"text": "An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging design problems. However, due to the computational cost of these high fidelity simulations, the use of neural networks, kernel methods, and other surrogate modeling techniques have become indispensable. Surrogate models are compact and cheap to evaluate, and have proven very useful for tasks such as optimization, design space exploration, prototyping, and sensitivity analysis. Consequently, in many fields there is great interest in tools and techniques that facilitate the construction of such regression models, while minimizing the computational cost and maximizing model accuracy. This paper presents a mature, flexible, and adaptive machine learning toolkit for regression modeling and active learning to tackle these issues. The toolkit brings together algorithms for data fitting, model selection, sample selection (active learning), hyperparameter optimization, and distributed computing in order to empower a domain expert to efficiently generate an accurate model for the problem or data at hand.",
"title": ""
},
{
"docid": "neg:1840188_4",
"text": "We address numerical versus experimental design and testing of miniature implantable antennas for biomedical telemetry in the medical implant communications service band (402-405 MHz). A model of a novel miniature antenna is initially proposed for skin implantation, which includes varying parameters to deal with fabrication-specific details. An iterative design-and-testing methodology is further suggested to determine the parameter values that minimize deviations between numerical and experimental results. To assist in vitro testing, a low-cost technique is proposed for reliably measuring the electric properties of liquids without requiring commercial equipment. Validation is performed within a specific prototype fabrication/testing approach for miniature antennas. To speed up design while providing an antenna for generic skin implantation, investigations are performed inside a canonical skin-tissue model. Resonance, radiation, and safety performance of the proposed antenna is finally evaluated inside an anatomical head model. This study provides valuable insight into the design of implantable antennas, assessing the significance of fabrication-specific details in numerical simulations and uncertainties in experimental testing for miniature structures. The proposed methodology can be applied to optimize antennas for several fabrication/testing approaches and biotelemetry applications.",
"title": ""
},
{
"docid": "neg:1840188_5",
"text": "The I-V curves for Schottky diodes with two different contact areas and geometries, fabricated in a 1.2 μm CMOS process, are presented. These curves are described by applying analysis and practical layout design, taking into account the resistance, capacitance, and reverse breakdown voltage of the semiconductor structure, and how the diode's operation depends on these parameters, in order to improve it. The described diodes are used in a charge pump circuit implementation.",
"title": ""
},
{
"docid": "neg:1840188_6",
"text": "Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients.",
"title": ""
},
{
"docid": "neg:1840188_7",
"text": "We have been developing the \"Smart Suit\" as a soft and lightweight wearable power assist system. A prototype for preventing low-back injury in agricultural work, together with its semi-active assist mechanism, was developed in our previous study. The previous prototype succeeded in reducing the average muscle fatigue of the body trunk in waist extension/flexion motion by about 14%. In this paper, we describe a prototype of the smart suit for supporting the waist and knee joint, and its control method for preventing displacement of the adjustable assist-force mechanism in order to maintain the assist efficiency.",
"title": ""
},
{
"docid": "neg:1840188_8",
"text": "The pervasiveness of Web 2.0 and social networking sites has enabled people to interact with each other easily through various social media. For instance, popular sites like Del.icio.us, Flickr, and YouTube allow users to comment on shared content (bookmarks, photos, videos), and users can tag their favorite content. Users can also connect with one another, and subscribe to or become a fan or a follower of others. These diverse activities result in a multi-dimensional network among actors, forming group structures with group members sharing similar interests or affiliations. This work systematically addresses two challenges. First, it is challenging to effectively integrate interactions over multiple dimensions to discover hidden community structures shared by heterogeneous interactions. We show that representative community detection methods for single-dimensional networks can be presented in a unified view. Based on this unified view, we present and analyze four possible integration strategies to extend community detection from single-dimensional to multi-dimensional networks. In particular, we propose a novel integration scheme based on structural features. Another challenge is the evaluation of different methods without ground truth information about community membership. We employ a novel cross-dimension network validation procedure to compare the performance of different methods. We use synthetic data to deepen our understanding, and real-world data to compare integration strategies as well as baseline methods in a large scale. We study further the computational time of different methods, normalization effect during integration, sensitivity to related parameters, and alternative community detection methods for integration. Lei Tang, Xufei Wang, Huan Liu Computer Science and Engineering, Arizona State University, Tempe, AZ 85287, USA E-mail: {L.Tang, Xufei.Wang, Huan.Liu@asu.edu}",
"title": ""
},
{
"docid": "neg:1840188_9",
"text": "This paper demonstrates the possibility and feasibility of an ultralow-cost antenna-in-package (AiP) solution for the upcoming generation of wireless local area networks (WLANs) denoted as IEEE 802.11ad. The iterative design procedure focuses on maximally alleviating the inherent disadvantages of the high-volume FR4 process at 60 GHz, such as its relatively high material loss and fabrication restrictions. Within the planar antenna package, the antenna element, vertical transition, antenna feedline, and low- and high-speed interfaces are allocated in a vertical schematic. A circular stacked patch antenna gives the antenna package a 10-dB return-loss bandwidth from 57 to 66 GHz. An embedded coplanar waveguide (CPW) topology is adopted for the antenna feedline and features less than 0.24 dB/mm in unit loss, which is extracted from measured parametric studies. The fabricated single antenna package is 9 mm × 6 mm × 0.404 mm in dimension. A multiple-element antenna package is also fabricated, and its feasibility for future phased-array applications is studied. Far-field radiation measurement using an in-house radio-frequency (RF) probe station validates that the single-antenna package exhibits more than 4.1 dBi gain and 76% radiation efficiency.",
"title": ""
},
{
"docid": "neg:1840188_10",
"text": "Past research has shown that real-time Twitter data can be used to predict market movement of securities and other financial instruments [1]. The goal of this paper is to prove whether Twitter data relating to cryptocurrencies can be utilized to develop advantageous crypto coin trading strategies. By way of supervised machine learning techniques, our team will outline several machine learning pipelines with the objective of identifying cryptocurrency market movement. The prominent alternative currency examined in this paper is Bitcoin (BTC). Our approach to cleaning data and applying supervised learning algorithms such as logistic regression, Naive Bayes, and support vector machines leads to a final hour-to-hour and day-to-day prediction accuracy exceeding 90%. In order to achieve this result, rigorous error analysis is employed in order to ensure that accurate inputs are utilized at each step of the model. This analysis yields a 25% accuracy increase on average.",
"title": ""
},
{
"docid": "neg:1840188_11",
"text": "With the rapidly growing scales of statistical problems, subset-based communication-free parallel MCMC methods are a promising future for large scale Bayesian analysis. In this article, we propose a new Weierstrass sampler for parallel MCMC based on independent subsets. The new sampler approximates the full data posterior samples via combining the posterior draws from independent subset MCMC chains, and thus enjoys a higher computational efficiency. We show that the approximation error for the Weierstrass sampler is bounded by some tuning parameters and provide suggestions for choice of the values. Simulation study shows the Weierstrass sampler is very competitive compared to other methods for combining MCMC chains generated for subsets, including averaging and kernel smoothing.",
"title": ""
},
{
"docid": "neg:1840188_12",
"text": "We present the exploring/exploiting tree (EET) algorithm for motion planning. The EET planner deliberately trades probabilistic completeness for computational efficiency. This tradeoff enables the EET planner to outperform state-of-the-art sampling-based planners by up to three orders of magnitude. We show that these considerable speedups apply for a variety of challenging real-world motion planning problems. The performance improvements are achieved by leveraging work space information to continuously adjust the sampling behavior of the planner. When the available information captures the planning problem's inherent structure, the planner's sampler becomes increasingly exploitative. When the available information is less accurate, the planner automatically compensates by increasing local configuration space exploration. We show that active balancing of exploration and exploitation based on workspace information can be a key ingredient to enabling highly efficient motion planning in practical scenarios.",
"title": ""
},
{
"docid": "neg:1840188_13",
"text": "A graphene patch microstrip antenna has been investigated for 600 GHz applications. The graphene material introduces a reconfigurable surface conductivity in the terahertz frequency band. The input impedance is calculated using the finite integral technique. A five-lumped-element equivalent circuit for the graphene patch microstrip antenna has been investigated. The values of the lumped elements in the equivalent circuit are optimized using particle swarm optimization (PSO) techniques. The optimization is performed to minimize the mean square error between the input impedance from the finite integral technique and that calculated by the equivalent circuit model. The effect of varying the graphene material's chemical potential and relaxation time on the radiation characteristics of the graphene patch microstrip antenna has been investigated. An improved equivalent circuit model has been introduced to best fit the input impedance using a rational function and PSO. The Cauer realization method is used to synthesize new lumped-element equivalent circuits.",
"title": ""
},
{
"docid": "neg:1840188_14",
"text": "As organizations aggressively deploy radio frequency identification systems, activists are increasingly concerned about RFID's potential to invade user privacy. This overview highlights potential threats and how they might be addressed using both technology and public policy.",
"title": ""
},
{
"docid": "neg:1840188_15",
"text": "We describe the design, implementation, and evaluation of EMBERS, an automated, 24x7 continuous system for forecasting civil unrest across 10 countries of Latin America using open source indicators such as tweets, news sources, blogs, economic indicators, and other data sources. Unlike retrospective studies, EMBERS has been making forecasts into the future since Nov 2012 which have been (and continue to be) evaluated by an independent T&E team (MITRE). Of note, EMBERS has successfully forecast the June 2013 protests in Brazil and Feb 2014 violent protests in Venezuela. We outline the system architecture of EMBERS, individual models that leverage specific data sources, and a fusion and suppression engine that supports trading off specific evaluation criteria. EMBERS also provides an audit trail interface that enables the investigation of why specific predictions were made along with the data utilized for forecasting. Through numerous evaluations, we demonstrate the superiority of EMBERS over baserate methods and its capability to forecast significant societal happenings.",
"title": ""
},
{
"docid": "neg:1840188_16",
"text": "PDDL+ is an extension of PDDL that enables modelling planning domains with mixed discrete-continuous dynamics. In this paper we present a new approach to PDDL+ planning based on Constraint Answer Set Programming (CASP), i.e. ASP rules plus numerical constraints. To the best of our knowledge, ours is the first attempt to link PDDL+ planning and logic programming. We provide an encoding of PDDL+ models into CASP problems. The encoding can handle non-linear hybrid domains, and represents a solid basis for applying logic programming to PDDL+ planning. As a case study, we consider the EZCSP CASP solver and obtain promising results on a set of PDDL+ benchmark problems.",
"title": ""
},
{
"docid": "neg:1840188_17",
"text": "In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.",
"title": ""
},
{
"docid": "neg:1840188_18",
"text": "specifications of the essential structure of a system. Models in the analysis or preliminary design stages focus on the key concepts and mechanisms of the eventual system. They correspond in certain ways with the final system. But details are missing from the model, which must be added explicitly during the design process. The purpose of the abstract models is to get the high-level pervasive issues correct before tackling the more localized details. These models are intended to be evolved into the final models by a careful process that guarantees that the final system correctly implements the intent of the earlier models. There must be traceability from these essential models to the full models; otherwise, there is no assurance that the final system correctly incorporates the key properties that the essential model sought to show. Essential models focus on semantic intent. They do not need the full range of implementation options. Indeed, low-level performance distinctions often obscure the logical semantics. The path from an essential model to a complete implementation model must be clear and straightforward, however, whether it is generated automatically by a code generator or evolved manually by a designer. Full specifications of a final system. An implementation model includes enough information to build the system. It must include not only the logical semantics of the system and the algorithms, data structures, and mechanisms that ensure proper performance, but also organizational decisions about the system artifacts that are necessary for cooperative work by humans and processing by tools. This kind of model must include constructs for packaging the model for human understanding and for computer convenience. These are not properties of the target application itself. Rather, they are properties of the construction process. Exemplars of typical or possible systems. 
Well-chosen examples can give insight to humans and can validate system specifications and implementations. Even a large collection of examples, however, necessarily falls short of a definitive description. Ultimately, we need models that specify the general case; that is what a program is, after all. Examples of typical data structures, interaction sequences, or object histories can help a human trying to understand a complicated situation, however. Examples must be used with some care. It is logically impossible to induce the general case from a set of examples, but well-chosen prototypes are the way most people think. An example model includes instances rather than general descriptors. It therefore tends to have a different feel than a generic descriptive model. Example models usually use only a subset of the UML constructs, those that deal with instances. Both descriptive models and exemplar models are useful in modeling a system. Complete or partial descriptions of systems. A model can be a complete description of a single system with no outside references. More often, it is organized as a set of distinct, discrete units, each of which may be stored and manipulated separately as a part of the entire description. Such models have “loose ends” that must be bound to other models in a complete system. Because the pieces have coherence and meaning, they can be combined with other pieces in various ways to produce many different systems. Achieving reuse is an important goal of good modeling. Models evolve over time. Models with greater degrees of detail are derived from more abstract models, and more concrete models are derived from more logical models. For example, a model might start as a high-level view of the entire system, with a few key services in brief detail and no embellishments. Over time, much more detail is added and variations are introduced.
Also over time, the focus shifts from a front-end, user-centered logical view to a back-end, implementationcentered physical view. As the developers work with a system and understand it better, the model must be iterated at all levels to capture that understanding; it is impossible to understand a large system in a single, linear pass. There is no one “right” form for a model.",
"title": ""
},
{
"docid": "neg:1840188_19",
"text": "So far, plant identification has posed challenges for several researchers. Various methods and features have been proposed. However, there are still many approaches that could be investigated to develop robust plant identification systems. This paper reports several experiments in using Zernike moments to build foliage plant identification systems. In this case, Zernike moments were combined with other features: geometric features, color moments, and the gray-level co-occurrence matrix (GLCM). To implement the identification systems, two approaches were investigated. The first approach used a distance measure and the second used Probabilistic Neural Networks (PNN). The results show that Zernike moments have promise as features in leaf identification systems when they are combined with other features.",
"title": ""
}
] |
1840189 | Robust discrete optimization and network flows | [
{
"docid": "pos:1840189_0",
"text": "We treat in this paper Linear Programming (LP) problems with uncertain data. The focus is on uncertainty associated with hard constraints: those which must be satisfied, whatever is the actual realization of the data (within a prescribed uncertainty set). We suggest a modeling methodology whereas an uncertain LP is replaced by its Robust Counterpart (RC). We then develop the analytical and computational optimization tools to obtain robust solutions of an uncertain LP problem via solving the corresponding explicitly stated convex RC program. In particular, it is shown that the RC of an LP with ellipsoidal uncertainty set is computationally tractable, since it leads to a conic quadratic program, which can be solved in polynomial time.",
"title": ""
},
{
"docid": "pos:1840189_1",
"text": "A robust approach to solving linear optimization problems with uncertain data was proposed in the early 1970s and has recently been extensively studied and extended. Under this approach, we are willing to accept a suboptimal solution for the nominal values of the data in order to ensure that the solution remains feasible and near optimal when the data changes. A concern with such an approach is that it might be too conservative. In this paper, we propose an approach that attempts to make this trade-off more attractive; that is, we investigate ways to decrease what we call the price of robustness. In particular, we flexibly adjust the level of conservatism of the robust solutions in terms of probabilistic bounds of constraint violations. An attractive aspect of our method is that the new robust formulation is also a linear optimization problem. Thus we naturally extend our methods to discrete optimization problems in a tractable way. We report numerical results for a portfolio optimization problem, a knapsack problem, and a problem from the NETLIB library.",
"title": ""
}
] | [
{
"docid": "neg:1840189_0",
"text": "There appear to be no brain imaging studies investigating which brain mechanisms subserve affective, impulsive violence versus planned, predatory violence. It was hypothesized that affectively violent offenders would have lower prefrontal activity, higher subcortical activity, and reduced prefrontal/subcortical ratios relative to controls, while predatory violent offenders would show relatively normal brain functioning. Glucose metabolism was assessed using positron emission tomography in 41 comparisons, 15 predatory murderers, and nine affective murderers in left and right hemisphere prefrontal (medial and lateral) and subcortical (amygdala, midbrain, hippocampus, and thalamus) regions. Affective murderers relative to comparisons had lower left and right prefrontal functioning, higher right hemisphere subcortical functioning, and lower right hemisphere prefrontal/subcortical ratios. In contrast, predatory murderers had prefrontal functioning that was more equivalent to comparisons, while also having excessively high right subcortical activity. Results support the hypothesis that emotional, unplanned impulsive murderers are less able to regulate and control aggressive impulses generated from subcortical structures due to deficient prefrontal regulation. It is hypothesized that excessive subcortical activity predisposes to aggressive behaviour, but that while predatory murderers have sufficiently good prefrontal functioning to regulate these aggressive impulses, the affective murderers lack such prefrontal control over emotion regulation.",
"title": ""
},
{
"docid": "neg:1840189_1",
"text": "This paper presents the design of a high-power, high-efficiency inverter for wireless power transfer systems operating at 13.56 MHz, using a multiphase resonant inverter and GaN HEMT devices. The high efficiency and stability of the inverter are the main targets of the design. The module design, the power loss analysis, and the drive circuit design have been addressed. In experiments, a 3 kW inverter with an efficiency of 96.1% is achieved, which significantly improves the efficiency of 13.56 MHz inverters. In the near future, a 10 kW inverter with an efficiency of over 95% can be realized by following this design concept.",
"title": ""
},
{
"docid": "neg:1840189_2",
"text": "Basic to all motile life is a differential approach/avoid response to perceived features of environment. The stages of response are initial reflexive noticing and orienting to the stimulus, preparation, and execution of response. Preparation involves a coordination of many aspects of the organism: muscle tone, posture, breathing, autonomic functions, motivational/emotional state, attentional orientation, and expectations. The organism organizes itself in relation to the challenge. We propose to call this the \"preparatory set\" (PS). We suggest that the concept of the PS can offer a more nuanced and flexible perspective on the stress response than do current theories. We also hypothesize that the mechanisms of body-mind therapeutic and educational systems (BTES) can be understood through the PS framework. We suggest that the BTES, including meditative movement, meditation, somatic education, and the body-oriented psychotherapies, are approaches that use interventions on the PS to remedy stress and trauma. We discuss how the PS can be adaptive or maladaptive, how BTES interventions may restore adaptive PS, and how these concepts offer a broader and more flexible view of the phenomena of stress and trauma. We offer supportive evidence for our hypotheses, and suggest directions for future research. We believe that the PS framework will point to ways of improving the management of stress and trauma, and that it will suggest directions of research into the mechanisms of action of BTES.",
"title": ""
},
{
"docid": "neg:1840189_3",
"text": "With the rapid development of web technology, an increasing number of enterprises have been seeking a method to facilitate the business decision-making process, power the bottom line, and achieve a fully coordinated organization, called business intelligence (BI). Unfortunately, traditional BI tends to be unmanageable, risky, and prohibitively expensive, especially for Small and Medium Enterprises (SMEs). The emergence of cloud computing and Software as a Service (SaaS) provides a cost-effective solution. Recently, business intelligence applications delivered via SaaS, termed Business Intelligence as a Service (SaaS BI), have proved to be the next generation in the BI market. However, since SaaS BI is still in its infancy, a general framework may be poorly considered. Therefore, in this paper we propose a general conceptual framework for SaaS BI and present several possible future directions for SaaS BI.",
"title": ""
},
{
"docid": "neg:1840189_4",
"text": "In this paper, we present a design of an RTE template structure for AUTOSAR-based vehicle applications. Due to the increase in software complexity in recent years, much greater effort is necessary to manage and develop software modules in the automotive industry. To deal with this issue, the AUTomotive Open System ARchitecture (AUTOSAR) partnership was launched. This embedded platform standardizes software architectures and provides a methodology supporting distributed processes. The RTE, located at the heart of AUTOSAR, implements the virtual function bus functionality for a particular electronic control unit (ECU), enabling communication between application software components. The purpose of this paper is to design an RTE structure following the AUTOSAR standard concept. As future work, this research will be further extended, and we will propose the development of an RTE generator based on the AUTOSAR RTE template structure.",
"title": ""
},
{
"docid": "neg:1840189_5",
"text": "Pairs trading is an important and challenging research area in computational finance, in which pairs of stocks are bought and sold in pair combinations for arbitrage opportunities. Traditional methods that solve this set of problems mostly rely on statistical methods such as regression. In contrast to the statistical approaches, recent advances in computational intelligence (CI) are leading to promising opportunities for solving problems in the financial applications more effectively. In this paper, we present a novel methodology for pairs trading using genetic algorithms (GA). Our results showed that the GA-based models are able to significantly outperform the benchmark and our proposed method is capable of generating robust models to tackle the dynamic characteristics in the financial application studied. Based upon the promising results obtained, we expect this GA-based method to advance the research in computational intelligence for finance and provide an effective solution to pairs trading for investment in practice.",
"title": ""
},
{
"docid": "neg:1840189_6",
"text": "Individuals with below-knee amputation have more difficulty balancing during walking, yet few studies have explored balance enhancement through active prosthesis control. We previously used a dynamical model to show that prosthetic ankle push-off work affects both sagittal and frontal plane dynamics, and that appropriate step-by-step control of push-off work can improve stability. We hypothesized that this approach could be applied to a robotic prosthesis to partially fulfill the active balance requirements of human walking, thereby reducing balance-related activity and associated effort for the person using the device. We conducted experiments on human participants (N = 10) with simulated amputation. Prosthetic ankle push-off work was varied on each step in ways expected to either stabilize, destabilize or have no effect on balance. Average ankle push-off work, known to affect effort, was kept constant across conditions. Stabilizing controllers commanded more push-off work on steps when the mediolateral velocity of the center of mass was lower than usual at the moment of contralateral heel strike. Destabilizing controllers enforced the opposite relationship, while a neutral controller maintained constant push-off work regardless of body state. A random disturbance to landing foot angle and a cognitive distraction task were applied, further challenging participants’ balance. We measured metabolic rate, foot placement kinematics, center of pressure kinematics, distraction task performance, and user preference in each condition. We expected the stabilizing controller to reduce active control of balance and balance-related effort for the user, improving user preference. The best stabilizing controller lowered metabolic rate by 5.5% (p = 0.003) and 8.5% (p = 0.02), and step width variability by 10.0% (p = 0.009) and 10.7% (p = 0.03) compared to conditions with no control and destabilizing control, respectively. 
Participants tended to prefer stabilizing controllers. These effects were not due to differences in average push-off work, which was unchanged across conditions, or to average gait mechanics, which were also unchanged. Instead, benefits were derived from step-by-step adjustments to prosthesis behavior in response to variations in mediolateral velocity at heel strike. Once-per-step control of prosthetic ankle push-off work can reduce both active control of foot placement and balance-related metabolic energy use during walking.",
"title": ""
},
{
"docid": "neg:1840189_7",
"text": "A power factor (PF) corrected single-stage, two-switch isolated zeta converter is proposed for arc welding. This modified zeta converter has two switches and two clamping diodes on the primary side of a high-frequency transformer. This, in turn, results in reduced switch stress. The proposed converter is designed to operate in discontinuous inductor current mode (DICM) to achieve inherent PF correction at the utility. The DICM operation substantially reduces the complexity of the control and effectively regulates the output dc voltage. The proposed converter offers several features, such as an inherent overload current limit and fast parametric response to load and source voltage conditions. This, in turn, results in improved performance in terms of power quality indices and an enhanced weld bead quality. The proposed modified zeta converter is designed and its performance is simulated in the MATLAB/Simulink environment. Simulated results are also verified experimentally on a developed prototype of the converter. The performance of the system is investigated in terms of its input PF, displacement PF, total harmonic distortion of the ac mains current, voltage regulation, and robustness to prove its efficacy in overall performance.",
"title": ""
},
{
"docid": "neg:1840189_8",
"text": "Heterogeneous information network (HIN) has been widely adopted in recommender systems due to its excellence in modeling complex context information. Although existing HIN based recommendation methods have achieved performance improvement to some extent, they have two major shortcomings. First, these models seldom learn an explicit representation for path or meta-path in the recommendation task. Second, they do not consider the mutual effect between the meta-path and the involved user-item pair in an interaction. To address these issues, we develop a novel deep neural network with the co-attention mechanism for leveraging rich meta-path based context for top-N recommendation. We elaborately design a three-way neural interaction model by explicitly incorporating meta-path based context. To construct the meta-path based context, we propose to use a priority based sampling technique to select high-quality path instances. Our model is able to learn effective representations for users, items and meta-path based context for implementing a powerful interaction function. The co-attention mechanism improves the representations for meta-path based context, users and items in a mutual enhancement way. Extensive experiments on three real-world datasets have demonstrated the effectiveness of the proposed model. In particular, the proposed model performs well in the cold-start scenario and has potentially good interpretability for the recommendation results.",
"title": ""
},
{
"docid": "neg:1840189_9",
"text": "In this work, the author implemented a novel technique for multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) based on space-frequency block coding (SF-BC), where the implemented code is designed based on QOC using reconfigurable antenna techniques. The proposed system is implemented in MATLAB, and the results show the best performance for a wireless communication system, with higher coding gain and diversity.",
"title": ""
},
{
"docid": "neg:1840189_10",
"text": "We present an optimistic primary-backup (so-called passive replication) mechanism for highly available Internet services on intercloud platforms. Our proposed method aims at providing Internet services despite the occurrence of a large-scale disaster. To this end, each service in our method creates replicas in different data centers and coordinates them with an optimistic consensus algorithm instead of a majority-based consensus algorithm such as Paxos. Although our method allows temporary inconsistencies among replicas, it eventually converges on the desired state without an interruption in services. In particular, the method tolerates simultaneous failure of the majority of nodes and a partitioning of the network. Moreover, through interservice communications, members of the service groups are autonomously reorganized according to the type of failure and changes in system load. This enables both load balancing and power savings, as well as provisioning for the next disaster. We demonstrate the service availability provided by our approach for simulated failure patterns and its adaptation to changes in workload for load balancing and power savings by experiments with a prototype implementation.",
"title": ""
},
{
"docid": "neg:1840189_11",
"text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. Since geolocalization of the specific reflection points is determined by the geometry only, a moderate pointing accuracy is only required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.",
"title": ""
},
{
"docid": "neg:1840189_12",
"text": "We study RF-enabled wireless energy transfer (WET) via energy beamforming, from a multi-antenna energy transmitter (ET) to multiple energy receivers (ERs) in a backscatter communication system such as RFID. The acquisition of the forward-channel (i.e., ET-to-ER) state information (F-CSI) at the ET (or RFID reader) is challenging, since the ERs (or RFID tags) are typically too energy-and-hardware-constrained to estimate or feedback the F-CSI. The ET leverages its observed backscatter signals to estimate the backscatter-channel (i.e., ET-to-ER-to-ET) state information (BS-CSI) directly. We first analyze the harvested energy obtained using the estimated BS-CSI. Furthermore, we optimize the resource allocation to maximize the total utility of harvested energy. For WET to single ER, we obtain the optimal channel-training energy in a semiclosed form. For WET to multiple ERs, we optimize the channel-training energy and the energy allocation weights for different energy beams. For the straightforward weighted-sum-energy (WSE) maximization, the optimal WET scheme is shown to use only one energy beam, which leads to unfairness among ERs and motivates us to consider the complicated proportional-fair-energy (PFE) maximization. For PFE maximization, we show that it is a biconvex problem, and propose a block-coordinate-descent-based algorithm to find the close-to-optimal solution. Numerical results show that with the optimized solutions, the harvested energy suffers slight reduction of less than 10%, compared to that obtained using the perfect F-CSI.",
"title": ""
},
{
"docid": "neg:1840189_13",
"text": "Plants make the world a greener and better place to live in. Although all plants need water to survive, giving them too much or too little can cause them to die. Thus, we need to implement an automatic plant watering system that ensures the plants are watered at regular intervals, with an appropriate amount, whenever they are in need. This paper describes the object-oriented design of an IoT-based Automated Plant Watering System.",
"title": ""
},
{
"docid": "neg:1840189_14",
"text": "Despite a flurry of activities aimed at serving customers better, few companies have systematically revamped their operations with customer loyalty in mind. Instead, most have adopted improvement programs ad hoc, and paybacks haven't materialized. Building a highly loyal customer base must be integral to a company's basic business strategy. Loyalty leaders like MBNA credit cards are successful because they have designed their entire business systems around customer loyalty--a self-reinforcing system in which the company delivers superior value consistently and reinvests cash flows to find and keep high-quality customers and employees. The economic benefits of high customer loyalty are measurable. When a company consistently delivers superior value and wins customer loyalty, market share and revenues go up, and the cost of acquiring new customers goes down. The better economics mean the company can pay workers better, which sets off a whole chain of events. Increased pay boosts employee morale and commitment; as employees stay longer, their productivity goes up and training costs fall; employees' overall job satisfaction, combined with their experience, helps them serve customers better; and customers are then more inclined to stay loyal to the company. Finally, as the best customers and employees become part of the loyalty-based system, competitors are left to survive with less desirable customers and less talented employees. To compete on loyalty, a company must understand the relationships between customer retention and the other parts of the business--and be able to quantify the linkages between loyalty and profits. It involves rethinking and aligning four important aspects of the business: customers, product/service offering, employees, and measurement systems.",
"title": ""
},
{
"docid": "neg:1840189_15",
"text": "Measurements of pH, acidity, and alkalinity are commonly used to describe water quality. The three variables are interrelated and can sometimes be confused. The pH of water is an intensity factor, while the acidity and alkalinity of water are capacity factors. More precisely, acidity and alkalinity are defined as a water’s capacity to neutralize strong bases or acids, respectively. The term “acidic” for pH values below 7 does not imply that the water has no alkalinity; likewise, the term “alkaline” for pH values above 7 does not imply that the water has no acidity. Water with a pH value between 4.5 and 8.3 has both total acidity and total alkalinity. The definition of pH, which is based on logarithmic transformation of the hydrogen ion concentration ([H+]), has caused considerable disagreement regarding the appropriate method of describing average pH. The opinion that pH values must be transformed to [H+] values before averaging appears to be based on the concept of mixing solutions of different pH. In practice, however, the averaging of [H+] values will not provide the correct average pH because buffers present in natural waters have a greater effect on final pH than does dilution alone. For nearly all uses of pH in fisheries and aquaculture, pH values may be averaged directly. When pH data sets are transformed to [H+] to estimate average pH, extreme pH values will distort the average pH. Values of pH conform more closely to a normal distribution than do values of [H+], making the pH values more acceptable for use in statistical analysis. Moreover, electrochemical measurements of pH and many biological responses to [H+] are described by the Nernst equation, which states that the measured or observed response is linearly related to 10-fold changes in [H+]. Based on these considerations, pH rather than [H+] is usually the most appropriate variable for use in statistical analysis. 
Temperature, salinity, hardness, pH, acidity, and alkalinity are fundamental variables that define the quality of water. Although all six variables have precise, unambiguous definitions, the last three variables are often misinterpreted in aquaculture and fisheries studies. In this paper, we explain the concepts of pH, acidity, and alkalinity, and we discuss practical relationships among those variables. We also discuss the concept of pH averaging as an expression of the central tendency of pH measurements. The concept of pH averaging is poorly understood, if not controversial, because many believe that pH values, which are log-transformed numbers, cannot be averaged directly. We argue that direct averaging of pH values is the simplest and most logical approach for most uses and that direct averaging is based on sound practical and statistical principles. THE pH CONCEPT The pH is an index of the hydrogen ion concentration ([H+]) in water. The [H+] affects most chemical and biological processes; thus, pH is an important variable in water quality endeavors. Water temperature probably is the only water quality variable that is measured more commonly than pH. The pH concept has its basis in the ionization of water:",
"title": ""
},
{
"docid": "neg:1840189_16",
"text": "Segmentation of the left ventricle (LV) from cardiac magnetic resonance imaging (MRI) datasets is an essential step for calculation of clinical indices such as ventricular volume and ejection fraction. In this work, we employ deep learning algorithms combined with deformable models to develop and evaluate a fully automatic LV segmentation tool from short-axis cardiac MRI datasets. The method employs deep learning algorithms to learn the segmentation task from the ground truth data. Convolutional networks are employed to automatically detect the LV chamber in MRI datasets. Stacked autoencoders are used to infer the LV shape. The inferred shape is incorporated into deformable models to improve the accuracy and robustness of the segmentation. We validated our method using 45 cardiac MR datasets from the MICCAI 2009 LV segmentation challenge and showed that it outperforms the state-of-the-art methods. Excellent agreement with the ground truth was achieved. Validation metrics, percentage of good contours, Dice metric, average perpendicular distance and conformity, were computed as 96.69%, 0.94, 1.81 mm and 0.86, versus those of 79.2-95.62%, 0.87-0.9, 1.76-2.97 mm and 0.67-0.78, obtained by other methods, respectively.",
"title": ""
},
{
"docid": "neg:1840189_17",
"text": "In this paper, we develop the nonsubsampled contourlet transform (NSCT) and study its applications. The construction proposed in this paper is based on a nonsubsampled pyramid structure and nonsubsampled directional filter banks. The result is a flexible multiscale, multidirection, and shift-invariant image decomposition that can be efficiently implemented via the à trous algorithm. At the core of the proposed scheme is the nonseparable two-channel nonsubsampled filter bank (NSFB). We exploit the less stringent design condition of the NSFB to design filters that lead to a NSCT with better frequency selectivity and regularity when compared to the contourlet transform. We propose a design framework based on the mapping approach, that allows for a fast implementation based on a lifting or ladder structure, and only uses one-dimensional filtering in some cases. In addition, our design ensures that the corresponding frame elements are regular, symmetric, and the frame is close to a tight one. We assess the performance of the NSCT in image denoising and enhancement applications. In both applications the NSCT compares favorably to other existing methods in the literature",
"title": ""
},
{
"docid": "neg:1840189_18",
"text": "Several diseases and disorders are treatable with therapeutic proteins, but some of these products may induce an immune response, especially when administered as multiple doses over prolonged periods. Antibodies are created by classical immune reactions or by the breakdown of immune tolerance; the latter is characteristic of human homologue products. Many factors influence the immunogenicity of proteins, including structural features (sequence variation and glycosylation), storage conditions (denaturation, or aggregation caused by oxidation), contaminants or impurities in the preparation, dose and length of treatment, as well as the route of administration, appropriate formulation and the genetic characteristics of patients. The clinical manifestations of antibodies directed against a given protein may include loss of efficacy, neutralization of the natural counterpart and general immune system effects (including allergy, anaphylaxis or serum sickness). An upsurge in the incidence of antibody-mediated pure red cell aplasia (PRCA) among patients taking one particular formulation of recombinant human erythropoietin (epoetin-alpha, marketed as Eprex(R)/Erypo(R); Johnson & Johnson) in Europe caused widespread concern. The PRCA upsurge coincided with removal of human serum albumin from epoetin-alpha in 1998 and its replacement with glycine and polysorbate 80. Although the immunogenic potential of this particular product may have been enhanced by the way the product was stored, handled and administered, it should be noted that the subcutaneous route of administration does not confer immunogenicity per se. The possible role of micelle (polysorbate 80 plus epoetin-alpha) formation in the PRCA upsurge with Eprex is currently being investigated.",
"title": ""
},
{
"docid": "neg:1840189_19",
"text": "Fatty acids are essential components of the dynamic lipid metabolism in cells. Fatty acids can also signal to intracellular pathways to trigger a broad range of cellular responses. Oleic acid is an abundant monounsaturated omega-9 fatty acid that impinges on different biological processes, but the mechanisms of action are not completely understood. Here, we report that oleic acid stimulates the cAMP/protein kinase A pathway and activates the SIRT1-PGC1α transcriptional complex to modulate rates of fatty acid oxidation. In skeletal muscle cells, oleic acid treatment increased intracellular levels of cyclic adenosine monophosphate (cAMP) that turned on protein kinase A activity. This resulted in SIRT1 phosphorylation at Ser-434 and elevation of its catalytic deacetylase activity. A direct SIRT1 substrate is the transcriptional coactivator peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α), which became deacetylated and hyperactive after oleic acid treatment. Importantly, oleic acid, but not other long chain fatty acids such as palmitate, increased the expression of genes linked to fatty acid oxidation pathway in a SIRT1-PGC1α-dependent mechanism. As a result, oleic acid potently accelerated rates of complete fatty acid oxidation in skeletal muscle cells. These results illustrate how a single long chain fatty acid specifically controls lipid oxidation through a signaling/transcriptional pathway. Pharmacological manipulation of this lipid signaling pathway might provide therapeutic possibilities to treat metabolic diseases associated with lipid dysregulation.",
"title": ""
}
] |
1840190 | Evaluation of the ARTutor augmented reality educational platform in tertiary education | [
{
"docid": "pos:1840190_0",
"text": "Navigation has been a popular area of research in both academia and industry. Combined with maps, and different localization technologies, navigation systems have become robust and more usable. By combining navigation with augmented reality, it can be improved further to become realistic and user friendly. This paper surveys existing researches carried out in this area, describes existing techniques for building augmented reality navigation systems, and the problems faced.",
"title": ""
}
] | [
{
"docid": "neg:1840190_0",
"text": "is published by Princeton University Press and copyrighted, © 2006, by Princeton University Press. All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher, except for reading and browsing via the World Wide Web. Users are not permitted to mount this file on any network servers.",
"title": ""
},
{
"docid": "neg:1840190_1",
"text": "Vast amounts of artistic data is scattered on-line from both museums and art applications. Collecting, processing and studying it with respect to all accompanying attributes is an expensive process. With a motivation to speed up and improve the quality of categorical analysis in the artistic domain, in this paper we propose an efficient and accurate method for multi-task learning with a shared representation applied in the artistic domain. We continue to show how different multi-task configurations of our method behave on artistic data and outperform handcrafted feature approaches as well as convolutional neural networks. In addition to the method and analysis, we propose a challenge like nature to the new aggregated data set with almost half a million samples and structuredmeta-data to encourage further research and societal engagement. ACM Reference format: Gjorgji Strezoski and Marcel Worring. 2017. OmniArt: Multi-task Deep Learning for Artistic Data Analysis.",
"title": ""
},
{
"docid": "neg:1840190_2",
"text": "Fog computing, as a promising technique, is with huge advantages in dealing with large amounts of data and information with low latency and high security. We introduce a promising multiple access technique entitled nonorthogonal multiple access to provide communication service between the fog layer and the Internet of Things (IoT) device layer in fog computing, and propose a dynamic cooperative framework containing two stages. At the first stage, dynamic IoT device clustering is solved to reduce the system complexity and the delay for the IoT devices with better channel conditions. At the second stage, power allocation based energy management is solved using Nash bargaining solution in each cluster to ensure fairness among IoT devices. Simulation results reveal that our proposed scheme can simultaneously achieve higher spectrum efficiency and ensure fairness among IoT devices compared to other schemes.",
"title": ""
},
{
"docid": "neg:1840190_3",
"text": "Automatic video captioning is challenging due to the complex interactions in dynamic real scenes. A comprehensive system would ultimately localize and track the objects, actions and interactions present in a video and generate a description that relies on temporal localization in order to ground the visual concepts. However, most existing automatic video captioning systems map from raw video data to high level textual description, bypassing localization and recognition, thus discarding potentially valuable information for content localization and generalization. In this work we present an automatic video captioning model that combines spatio-temporal attention and image classification by means of deep neural network structures based on long short-term memory. The resulting system is demonstrated to produce state-of-the-art results in the standard YouTube captioning benchmark while also offering the advantage of localizing the visual concepts (subjects, verbs, objects), with no grounding supervision, over space and time.",
"title": ""
},
{
"docid": "neg:1840190_4",
"text": "As mobile devices are equipped with more memory and computational capability, a novel peer-to-peer communication model for mobile cloud computing is proposed to interconnect nearby mobile devices through various short range radio communication technologies to form mobile cloudlets, where every mobile device works as either a computational service provider or a client of a service requester. Though this kind of computation offloading benefits compute-intensive applications, the corresponding service models and analytics tools are remaining open issues. In this paper we categorize computation offloading into three modes: remote cloud service mode, connected ad hoc cloudlet service mode, and opportunistic ad hoc cloudlet service mode. We also conduct a detailed analytic study for the proposed three modes of computation offloading at ad hoc cloudlet.",
"title": ""
},
{
"docid": "neg:1840190_5",
"text": "Knowledge extraction from unstructured data is a challenging research problem in research domain of Natural Language Processing (NLP). It requires complex NLP tasks like entity extraction and Information Extraction (IE), but one of the most challenging tasks is to extract all the required entities of data in the form of structured format so that data analysis can be applied. Our focus is to explain how the data is extracted in the form of datasets or conventional database so that further text and data analysis can be carried out. This paper presents a framework for Hadith data extraction from the Hadith authentic sources. Hadith is the collection of sayings of Holy Prophet Muhammad, who is the last holy prophet according to Islamic teachings. This paper discusses the preparation of the dataset repository and highlights issues in the relevant research domain. The research problem and their solutions of data extraction, preprocessing and data analysis are elaborated. The results have been evaluated using the standard performance evaluation measures. The dataset is available in multiple languages, multiple formats and is available free of cost for research purposes. Keywords—Data extraction; preprocessing; regex; Hadith; text analysis; parsing",
"title": ""
},
{
"docid": "neg:1840190_6",
"text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.",
"title": ""
},
{
"docid": "neg:1840190_7",
"text": "This paper provides a new metric, knowledge management performance index (KMPI), for assessing the performance of a firm in its knowledge management (KM) at a point in time. Firms are assumed to have always been oriented toward accumulating and applying knowledge to create economic value and competitive advantage. We therefore suggest the need for a KMPI which we have defined as a logistic function having five components that can be used to determine the knowledge circulation process (KCP): knowledge creation, knowledge accumulation, knowledge sharing, knowledge utilization, and knowledge internalization. When KCP efficiency increases, KMPI will also expand, enabling firms to become knowledgeintensive. To prove KMPI’s contribution, a questionnaire survey was conducted on 101 firms listed in the KOSDAQ market in Korea. We associated KMPI with three financial measures: stock price, price earnings ratio (PER), and R&D expenditure. Statistical results show that the proposed KMPI can represent KCP efficiency, while the three financial performance measures are also useful. # 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840190_8",
"text": "Advanced Driver Assistance Systems (ADAS) based on video camera tends to be generalized in today's automotive. However, if most of these systems perform nicely in good weather conditions, they perform very poorly under adverse weather particularly under rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing technics to highlight raindrops. Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.",
"title": ""
},
{
"docid": "neg:1840190_9",
"text": "We propose a novel method for predicting image labels by fusing image content descriptors with the social media context of each image. An image uploaded to a social media site such as Flickr often has meaningful, associated information, such as comments and other images the user has uploaded, that is complementary to pixel content and helpful in predicting labels. Prediction challenges such as ImageNet [6]and MSCOCO [19] use only pixels, while other methods make predictions purely from social media context [21]. Our method is based on a novel fully connected Conditional Random Field (CRF) framework, where each node is an image, and consists of two deep Convolutional Neural Networks (CNN) and one Recurrent Neural Network (RNN) that model both textual and visual node/image information. The edge weights of the CRF graph represent textual similarity and link-based metadata such as user sets and image groups. We model the CRF as an RNN for both learning and inference, and incorporate the weighted ranking loss and cross entropy loss into the CRF parameter optimization to handle the training data imbalance issue. Our proposed approach is evaluated on the MIR-9K dataset and experimentally outperforms current state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840190_10",
"text": "This chapter introduces some of the theoretical foundations of swarm intelligence. We focus on the design and implementation of the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms for various types of function optimization problems, real world applications and data mining. Results are analyzed, discussed and their potentials are illustrated.",
"title": ""
},
{
"docid": "neg:1840190_11",
"text": "Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.",
"title": ""
},
{
"docid": "neg:1840190_12",
"text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.",
"title": ""
},
{
"docid": "neg:1840190_13",
"text": "Contemporary vehicles are getting equipped with an increasing number of Electronic Control Units (ECUs) and wireless connectivities. Although these have enhanced vehicle safety and efficiency, they are accompanied with new vulnerabilities. In this paper, we unveil a new important vulnerability applicable to several in-vehicle networks including Control Area Network (CAN), the de facto standard in-vehicle network protocol. Specifically, we propose a new type of Denial-of-Service (DoS), called the bus-off attack, which exploits the error-handling scheme of in-vehicle networks to disconnect or shut down good/uncompromised ECUs. This is an important attack that must be thwarted, since the attack, once an ECU is compromised, is easy to be mounted on safety-critical ECUs while its prevention is very difficult. In addition to the discovery of this new vulnerability, we analyze its feasibility using actual in-vehicle network traffic, and demonstrate the attack on a CAN bus prototype as well as on two real vehicles. Based on our analysis and experimental results, we also propose and evaluate a mechanism to detect and prevent the bus-off attack.",
"title": ""
},
{
"docid": "neg:1840190_14",
"text": "Cybercrime is all about the crimes in which communication channel and communication device has been used directly or indirectly as a medium whether it is a Laptop, Desktop, PDA, Mobile phones, Watches, Vehicles. The report titled “Global Risks for 2012”, predicts cyber-attacks as one of the top five risks in the World for Government and business sector. Cyber crime is a crime which is harder to detect and hardest to stop once occurred causing a long term negative impact on victims. With the increasing popularity of online banking, online shopping which requires sensitive personal and financial data, it is a term that we hear in the news with some frequency. Now, in order to protect ourselves from this crime we need to know what it is and how it does works against us. This paper presents a brief overview of all about cyber criminals and crime with its evolution, types, case study, preventive majors and the department working to combat this crime.",
"title": ""
},
{
"docid": "neg:1840190_15",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
},
{
"docid": "neg:1840190_16",
"text": "Suicides committed by intraorally placed firecrackers are rare events. Given to the use of more powerful components such as flash powder recently, some firecrackers may cause massive life-threatening injuries in case of such misuse. Innocuous black powder firecrackers are subject to national explosives legislation and only have the potential to cause harmless injuries restricted to the soft tissue. We here report two cases of suicide committed by an intraoral placement of firecrackers, resulting in similar patterns of skull injury. As it was first unknown whether black powder firecrackers can potentially cause serious skull injury, we compared the potential of destruction using black powder and flash powder firecrackers in a standardized skull simulant model (Synbone, Malans, Switzerland). This was the first experiment to date simulating the impacts resulting from an intraoral burst in a skull simulant model. The intraoral burst of a “D-Böller” (an example of one of the most powerful black powder firecrackers in Germany) did not lead to any injuries of the osseous skull. In contrast, the “La Bomba” (an example of the weakest known flash powder firecrackers) caused complex fractures of both the viscero- and neurocranium. The results obtained from this experimental study indicate that black powder firecrackers are less likely to cause severe injuries as a consequence of intraoral explosions, whereas flash powder-based crackers may lead to massive life-threatening craniofacial destructions and potentially death.",
"title": ""
},
{
"docid": "neg:1840190_17",
"text": "Enforcing security in Internet of Things environments has been identified as one of the top barriers for realizing the vision of smart, energy-efficient homes and buildings. In this context, understanding the risks related to the use and potential misuse of information about homes, partners, and end-users, as well as, forming methods for integrating security-enhancing measures in the design is not straightforward and thus requires substantial investigation. A risk analysis applied on a smart home automation system developed in a research project involving leading industrial actors has been conducted. Out of 32 examined risks, 9 were classified as low and 4 as high, i.e., most of the identified risks were deemed as moderate. The risks classified as high were either related to the human factor or to the software components of the system. The results indicate that with the implementation of standard security features, new, as well as, current risks can be minimized to acceptable levels albeit that the most serious risks, i.e., those derived from the human factor, need more careful consideration, as they are inherently complex to handle. A discussion of the implications of the risk analysis results points to the need for a more general model of security and privacy included in the design phase of smart homes. With such a model of security and privacy in design in place, it will contribute to enforcing system security and enhancing user privacy in smart homes, and thus helping to further realize the potential in such IoT environments.",
"title": ""
},
{
"docid": "neg:1840190_18",
"text": "The advent of both Cloud computing and Internet of Things (IoT) is changing the way of conceiving information and communication systems. Generally, we talk about IoT Cloud to indicate a new type of distributed system consisting of a set of smart devices interconnected with a remote Cloud infrastructure, platform, or software through the Internet and able to provide IoT as a Service (IoTaaS). In this paper, we discuss the near future evolution of IoT Clouds towards federated ecosystems, where IoT providers cooperate to offer more flexible services. Moreover, we present a general three-layer IoT Cloud Federation architecture, highlighting new business opportunities and challenges.",
"title": ""
},
{
"docid": "neg:1840190_19",
"text": "Base on innovation resistance theory, this research builds the model of factors affecting consumers' resistance in using online travel in Thailand. Through the questionnaires and the SEM methods, empirical analysis results show that functional barriers are even greater sources of resistance to online travel website than psychological barriers. Online experience and independent travel experience have significantly influenced on consumer innovation resistance. Social influence plays an important role in this research.",
"title": ""
}
] |
1840191 | Malacology: A Programmable Storage System | [
{
"docid": "pos:1840191_0",
"text": "Distributed systems are easier to build than ever with the emergence of new, data-centric abstractions for storing and computing over massive datasets. However, similar abstractions do not exist for storing and accessing meta-data. To fill this gap, Tango provides developers with the abstraction of a replicated, in-memory data structure (such as a map or a tree) backed by a shared log. Tango objects are easy to build and use, replicating state via simple append and read operations on the shared log instead of complex distributed protocols; in the process, they obtain properties such as linearizability, persistence and high availability from the shared log. Tango also leverages the shared log to enable fast transactions across different objects, allowing applications to partition state across machines and scale to the limits of the underlying log without sacrificing consistency.",
"title": ""
}
] | [
{
"docid": "neg:1840191_0",
"text": "Sentiment analysis is the automatic classification of the overall opinion conveyed by a text towards its subject matter. This paper discusses an experiment in the sentiment analysis of of a collection of movie reviews that have been automatically translated to Indonesian. Following [1], we employ three well known classification techniques: naive bayes, maximum entropy, and support vector machines, employing unigram presence and frequency values as the features. The translation is achieved through machine translation and simple word substitutions based on a bilingual dictionary constructed from various online resources. Analysis of the Indonesian translations yielded an accuracy of up to 78.82%, still short of the accuracy for the English documents (80.09%), but satisfactorily high given the simple translation approach.",
"title": ""
},
{
"docid": "neg:1840191_1",
"text": "BACKGROUND\nRecent estimates concerning the prevalence of autistic spectrum disorder are much higher than those reported 30 years ago, with at least 1 in 400 children affected. This group of children and families have important service needs. The involvement of parents in implementing intervention strategies designed to help their autistic children has long been accepted as helpful. The potential benefits are increased skills and reduced stress for parents as well as children.\n\n\nOBJECTIVES\nThe objective of this review was to determine the extent to which parent-mediated early intervention has been shown to be effective in the treatment of children aged 1 year to 6 years 11 months with autistic spectrum disorder. In particular, it aimed to assess the effectiveness of such interventions in terms of the benefits for both children and their parents.\n\n\nSEARCH STRATEGY\nA range of psychological, educational and biomedical databases were searched. Bibliographies and reference lists of key articles were searched, field experts were contacted and key journals were hand searched.\n\n\nSELECTION CRITERIA\nOnly randomised or quasi-randomised studies were included. Study interventions had a significant focus on parent-implemented early intervention, compared to a group of children who received no treatment, a waiting list group or a different form of intervention. There was at least one objective, child related outcome measure.\n\n\nDATA COLLECTION AND ANALYSIS\nAppraisal of the methodological quality of included studies was carried out independently by two reviewers. Differences between the included studies in terms of the type of intervention, the comparison groups used and the outcome measures were too great to allow for direct comparison.\n\n\nMAIN RESULTS\nThe results of this review are based on data from two studies. Two significant results were found to favour parent training in one study: child language and maternal knowledge of autism. 
In the other, intensive intervention (involving parents, but primarily delivered by professionals) was associated with better child outcomes on direct measurement than were found for parent-mediated early intervention, but no differences were found in relation to measures of parent and teacher perceptions of skills and behaviours.\n\n\nREVIEWER'S CONCLUSIONS\nThis review has little to offer in the way of implications for practice: there were only two studies, the numbers of participants included were small, and the two studies could not be compared directly to one another. In terms of research, randomised controlled trials involving large samples need to be carried out, involving both short and long-term outcome information and full economic evaluations. Research in this area is hampered by barriers to randomisation, such as availability of equivalent services.",
"title": ""
},
{
"docid": "neg:1840191_2",
"text": "Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.",
"title": ""
},
{
"docid": "neg:1840191_3",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs) [1]. Brain-computer interfaces are devices that process a user’s brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted non-disabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming.",
"title": ""
},
{
"docid": "neg:1840191_4",
"text": "Modern Internet-enabled smart lights promise energy efficiency and many additional capabilities over traditional lamps. However, these connected lights create a new attack surface, which can be maliciously used to violate users’ privacy and security. In this paper, we design and evaluate novel attacks that take advantage of light emitted by modern smart bulbs in order to infer users’ private data and preferences. The first two attacks are designed to infer users’ audio and video playback by a systematic observation and analysis of the multimediavisualization functionality of smart light bulbs. The third attack utilizes the infrared capabilities of such smart light bulbs to create a covert-channel, which can be used as a gateway to exfiltrate user’s private data out of their secured home or office network. A comprehensive evaluation of these attacks in various real-life settings confirms their feasibility and affirms the need for new privacy protection mechanisms.",
"title": ""
},
{
"docid": "neg:1840191_5",
"text": "Complex Event Processing (CEP) is a stream processing model that focuses on detecting event patterns in continuous event streams. While the CEP model has gained popularity in the research communities and commercial technologies, the problem of gracefully degrading performance under heavy load in the presence of resource constraints, or load shedding, has been largely overlooked. CEP is similar to “classical” stream data management, but addresses a substantially different class of queries. This unfortunately renders the load shedding algorithms developed for stream data processing inapplicable. In this paper we study CEP load shedding under various resource constraints. We formalize broad classes of CEP load-shedding scenarios as different optimization problems. We demonstrate an array of complexity results that reveal the hardness of these problems and construct shedding algorithms with performance guarantees. Our results shed some light on the difficulty of developing load-shedding algorithms that maximize utility.",
"title": ""
},
{
"docid": "neg:1840191_6",
"text": "0957-4174/$ see front matter 2008 Elsevier Ltd. A doi:10.1016/j.eswa.2008.06.054 * Corresponding author. Address: School of Compu ogy, Beijing Jiaotong University, Beijing 100044, Chin E-mail address: jnchen06@163.com (J. Chen). As an important preprocessing technology in text classification, feature selection can improve the scalability, efficiency and accuracy of a text classifier. In general, a good feature selection method should consider domain and algorithm characteristics. As the Naïve Bayesian classifier is very simple and efficient and highly sensitive to feature selection, so the research of feature selection specially for it is significant. This paper presents two feature evaluation metrics for the Naïve Bayesian classifier applied on multiclass text datasets: Multi-class Odds Ratio (MOR), and Class Discriminating Measure (CDM). Experiments of text classification with Naïve Bayesian classifiers were carried out on two multi-class texts collections. As the results indicate, CDM and MOR gain obviously better selecting effect than other feature selection approaches. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840191_7",
"text": "In order to dock an embedded intelligent wheelchair into a U-shape bed automatically through visual servo, this paper proposes a real-time U-shape bed localization method on an embedded vision system based on FPGA and DSP. This method locates the U-shape bed through finding its line contours. The task can be done in a parallel way with FPGA does line extraction and DSP does line contour finding. Experiments show that, the speed and precision of the U-shape bed localization method proposed in this paper which based on an embedded vision system can satisfy the needs of the system.",
"title": ""
},
{
"docid": "neg:1840191_8",
"text": "The decoder of the sphinx-4 speech recognition system incorporates several new design strategies which have not been used earlier in conventional decoders of HMM-based large vocabulary speech recognition systems. Some new design aspects include graph construction for multilevel parallel decoding with independent simultaneous feature streams without the use of compound HMMs, the incorporation of a generalized search algorithm that subsumes Viterbi and full-forward decoding as special cases, design of generalized language HMM graphs from grammars and language models of multiple standard formats, that toggles trivially from flat search structure to tree search structure etc. This paper describes some salient design aspects of the Sphinx-4 decoder and includes preliminary performance measures relating to speed and accu-",
"title": ""
},
{
"docid": "neg:1840191_9",
"text": "In this paper we present CMUcam3, a low-cost, open source, em bedded computer vision platform. The CMUcam3 is the third generation o f the CMUcam system and is designed to provide a flexible and easy to use ope n source development environment along with a more powerful hardware platfo rm. The goal of the system is to provide simple vision capabilities to small emb dded systems in the form of an intelligent sensor that is supported by an open sou rce community. The hardware platform consists of a color CMOS camera, a frame bu ff r, a low cost 32bit ARM7TDMI microcontroller, and an MMC memory card slot. T he CMUcam3 also includes 4 servo ports, enabling one to create entire, w orking robots using the CMUcam3 board as the only requisite robot processor. Cus tom C code can be developed using an optimized GNU toolchain and executabl es can be flashed onto the board using a serial port without external download ing hardware. The development platform includes a virtual camera target allowi ng for rapid application development exclusively on a PC. The software environment c omes with numerous open source example applications and libraries includi ng JPEG compression, frame differencing, color tracking, convolutions, histog ramming, edge detection, servo control, connected component analysis, FAT file syste m upport, and a face detector.",
"title": ""
},
{
"docid": "neg:1840191_10",
"text": "In this paper, we present the design and fabrication of a centimeter-scale propulsion system for a robotic fish. The key to the design is selection of an appropriate actuator and a body frame that is simple and compact. SMA spring actuators are customized to provide the necessary work output for the microrobotic fish. The flexure joints, electrical wiring and attachment pads for SMA actuators are all embedded in a single layer of copper laminated polymer film, sandwiched between two layers of glass fiber. Instead of using individual actuators to rotate each joint, each actuator rotates all the joints to a certain mode shape and undulatory motion is created by a timed sequence of these mode shapes. Subcarangiform swimming mode of minnows has been emulated using five links and four actuators. The size of the four-joint propulsion system is 6 mm wide, 40 mm long with the body frame thickness of 0.25 mm.",
"title": ""
},
{
"docid": "neg:1840191_11",
"text": "INTRODUCTION\nDisruptions in sleep and circadian rhythms are observed in individuals with bipolar disorders (BD), both during acute mood episodes and remission. Such abnormalities may relate to dysfunction of the molecular circadian clock and could offer a target for new drugs.\n\n\nAREAS COVERED\nThis review focuses on clinical, actigraphic, biochemical and genetic biomarkers of BDs, as well as animal and cellular models, and highlights that sleep and circadian rhythm disturbances are closely linked to the susceptibility to BDs and vulnerability to mood relapses. As lithium is likely to act as a synchronizer and stabilizer of circadian rhythms, we will review pharmacogenetic studies testing circadian gene polymorphisms and prophylactic response to lithium. Interventions such as sleep deprivation, light therapy and psychological therapies may also target sleep and circadian disruptions in BDs efficiently for treatment and prevention of bipolar depression.\n\n\nEXPERT OPINION\nWe suggest that future research should clarify the associations between sleep and circadian rhythm disturbances and alterations of the molecular clock in order to identify critical targets within the circadian pathway. The investigation of such targets using human cellular models or animal models combined with 'omics' approaches are crucial steps for new drug development.",
"title": ""
},
{
"docid": "neg:1840191_12",
"text": "Studying temporal dynamics of topics in social media is very useful to understand online user behaviors. Most of the existing work on this subject usually monitors the global trends, ignoring variation among communities. Since users from different communities tend to have varying tastes and interests, capturing communitylevel temporal change can improve the understanding and management of social content. Additionally, it can further facilitate the applications such as community discovery, temporal prediction and online marketing. However, this kind of extraction becomes challenging due to the intricate interactions between community and topic, and intractable computational complexity. In this paper, we take a unified solution towards the communitylevel topic dynamic extraction. A probabilistic model, CosTot (Community Specific Topics-over-Time) is proposed to uncover the hidden topics and communities, as well as capture community-specific temporal dynamics. Specifically, CosTot considers text, time, and network information simultaneously, and well discovers the interactions between community and topic over time. We then discuss the approximate inference implementation to enable scalable computation of model parameters, especially for large social data. Based on this, the application layer support for multi-scale temporal analysis and community exploration is also investigated. We conduct extensive experimental studies on a large real microblog dataset, and demonstrate the superiority of proposed model on tasks of time stamp prediction, link prediction and topic perplexity.",
"title": ""
},
{
"docid": "neg:1840191_13",
"text": "Cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome) is characterized by deficits in executive function, linguistic processing, spatial cognition, and affect regulation. Diagnosis currently relies on detailed neuropsychological testing. The aim of this study was to develop an office or bedside cognitive screen to help identify CCAS in cerebellar patients. Secondary objectives were to evaluate whether available brief tests of mental function detect cognitive impairment in cerebellar patients, whether cognitive performance is different in patients with isolated cerebellar lesions versus complex cerebrocerebellar pathology, and whether there are cognitive deficits that should raise red flags about extra-cerebellar pathology. Comprehensive standard neuropsychological tests, experimental measures and clinical rating scales were administered to 77 patients with cerebellar disease-36 isolated cerebellar degeneration or injury, and 41 complex cerebrocerebellar pathology-and to healthy matched controls. Tests that differentiated patients from controls were used to develop a screening instrument that includes the cardinal elements of CCAS. We validated this new scale in a new cohort of 39 cerebellar patients and 55 healthy controls. We confirm the defining features of CCAS using neuropsychological measures. Deficits in executive function were most pronounced for working memory, mental flexibility, and abstract reasoning. Language deficits included verb for noun generation and phonemic > semantic fluency. Visual spatial function was degraded in performance and interpretation of visual stimuli. Neuropsychiatric features included impairments in attentional control, emotional control, psychosis spectrum disorders and social skill set. 
From these results, we derived a 10-item scale providing total raw score, cut-offs for each test, and pass/fail criteria that determined 'possible' (one test failed), 'probable' (two tests failed), and 'definite' CCAS (three tests failed). When applied to the exploratory cohort, and administered to the validation cohort, the CCAS/Schmahmann scale identified sensitivity and selectivity, respectively as possible exploratory cohort: 85%/74%, validation cohort: 95%/78%; probable exploratory cohort: 58%/94%, validation cohort: 82%/93%; and definite exploratory cohort: 48%/100%, validation cohort: 46%/100%. In patients in the exploratory cohort, Mini-Mental State Examination and Montreal Cognitive Assessment scores were within normal range. Complex cerebrocerebellar disease patients were impaired on similarities in comparison to isolated cerebellar disease. Inability to recall words from multiple choice occurred only in patients with extra-cerebellar disease. The CCAS/Schmahmann syndrome scale is useful for expedited clinical assessment of CCAS in patients with cerebellar disorders.awx317media15678692096001.",
"title": ""
},
{
"docid": "neg:1840191_14",
"text": "During crowded events, cellular networks face voice and data traffic volumes that are often orders of magnitude higher than what they face during routine days. Despite the use of portable base stations for temporarily increasing communication capacity and free Wi-Fi access points for offloading Internet traffic from cellular base stations, crowded events still present significant challenges for cellular network operators looking to reduce dropped call events and improve Internet speeds. For effective cellular network design, management, and optimization, it is crucial to understand how cellular network performance degrades during crowded events, what causes this degradation, and how practical mitigation schemes would perform in real-life crowded events. This paper makes a first step towards this end by characterizing the operational performance of a tier-1 cellular network in the United States during two high-profile crowded events in 2012. We illustrate how the changes in population distribution, user behavior, and application workload during crowded events result in significant voice and data performance degradation, including more than two orders of magnitude increase in connection failures. Our findings suggest two mechanisms that can improve performance without resorting to costly infrastructure changes: radio resource allocation tuning and opportunistic connection sharing. Using trace-driven simulations, we show that more aggressive release of radio resources via 1-2 seconds shorter RRC timeouts as compared to routine days helps to achieve better tradeoff between wasted radio resources, energy consumption, and delay during crowded events; and opportunistic connection sharing can reduce connection failures by 95% when employed by a small number of devices in each cell sector.",
"title": ""
},
{
"docid": "neg:1840191_15",
"text": "A real-time algorithm to detect eye blinks in a video sequence from a standard camera is proposed. Recent landmark detectors, trained on in-the-wild datasets exhibit excellent robustness against face resolution, varying illumination and facial expressions. We show that the landmarks are detected precisely enough to reliably estimate the level of the eye openness. The proposed algorithm therefore estimates the facial landmark positions, extracts a single scalar quantity – eye aspect ratio (EAR) – characterizing the eye openness in each frame. Finally, blinks are detected either by an SVM classifier detecting eye blinks as a pattern of EAR values in a short temporal window or by hidden Markov model that estimates the eye states followed by a simple state machine recognizing the blinks according to the eye closure lengths. The proposed algorithm has comparable results with the state-of-the-art methods on three standard datasets.",
"title": ""
},
{
"docid": "neg:1840191_16",
"text": "An ever-increasing amount of information on the Web today is available only through search interfaces: the users have to type in a set of keywords in a search form in order to access the pages from certain Web sites. These pages are often referred to as the Hidden Web or the Deep Web. Since there are no static links to the Hidden Web pages, search engines cannot discover and index such pages and thus do not return them in the results. However, according to recent studies, the content provided by many Hidden Web sites is often of very high quality and can be extremely valuable to many users. In this paper, we study how we can build an effective Hidden Web crawler that can autonomously discover and download pages from the Hidden Web. Since the only “entry point” to a Hidden Web site is a query interface, the main challenge that a Hidden Web crawler has to face is how to automatically generate meaningful queries to issue to the site. Here, we provide a theoretical framework to investigate the query generation problem for the Hidden Web and we propose effective policies for generating queries automatically. Our policies proceed iteratively, issuing a different query in every iteration. We experimentally evaluate the effectiveness of these policies on 4 real Hidden Web sites and our results are very promising. For instance, in one experiment, one of our policies downloaded more than 90% of a Hidden Web site (that contains 14 million documents) after issuing fewer than 100 queries.",
"title": ""
},
{
"docid": "neg:1840191_17",
"text": "Botnet is one of the most serious threats to cyber security as it provides a distributed platform for several illegal activities. Regardless of the availability of numerous methods proposed to detect botnets, still it is a challenging issue as botmasters are continuously improving bots to make them stealthier and evade detection. Most of the existing detection techniques cannot detect modern botnets in an early stage, or they are specific to command and control protocol and structures. In this paper, we propose a novel approach to detect botnets irrespective of their structures, based on network traffic flow behavior analysis and machine learning techniques. The experimental evaluation of the proposed method with real-world benchmark datasets shows the efficiency of the method. Also, the system is able to identify the new botnets with high detection accuracy and low false positive rate. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
}
] |
1840192 | Towards an Ontology-Driven Blockchain Design for Supply Chain Provenance | [
{
"docid": "pos:1840192_0",
"text": "increasingly agile and integrated across their functions. Enterprise models play a critical role in this integration, enabling better designs for enterprises, analysis of their performance, and management of their operations. This article motivates the need for enterprise models and introduces the concepts of generic and deductive enterprise models. It reviews research to date on enterprise modeling and considers in detail the Toronto virtual enterprise effort at the University of Toronto.",
"title": ""
}
] | [
{
"docid": "neg:1840192_0",
"text": "Dropout prediction in MOOCs is a well-researched problem where we classify which students are likely to persist or drop out of a course. Most research into creating models which can predict outcomes is based on student engagement data. Why these students might be dropping out has only been studied through retroactive exit surveys. This helps identify an important extension area to dropout prediction— how can we interpret dropout predictions at the student and model level? We demonstrate how existing MOOC dropout prediction pipelines can be made interpretable, all while having predictive performance close to existing techniques. We explore each stage of the pipeline as design components in the context of interpretability. Our end result is a layer which longitudinally interprets both predictions and entire classification models of MOOC dropout to provide researchers with in-depth insights of why a student is likely to dropout.",
"title": ""
},
{
"docid": "neg:1840192_1",
"text": "Mobile robot vision-based navigation has been the source of countless research contributions, from the domains of both vision and control. Vision is becoming more and more common in applications such as localization, automatic map construction, autonomous navigation, path following, inspection, monitoring or risky situation detection. This survey presents those pieces of work, from the nineties until nowadays, which constitute a wide progress in visual navigation techniques for land, aerial and autonomous underwater vehicles. The paper deals with two major approaches: map-based navigation and mapless navigation. Map-based navigation has been in turn subdivided in metric map-based navigation and topological map-based navigation. Our outline to mapless navigation includes reactive techniques based on qualitative characteristics extraction, appearance-based localization, optical flow, features tracking, plane ground detection/tracking, etc... The recent concept of visual sonar has also been revised.",
"title": ""
},
{
"docid": "neg:1840192_2",
"text": "In this paper, we propose a method that infers both accurate depth maps and color-consistent stereo images for radiometrically varying stereo images. In general, stereo matching and performing color consistency between stereo images are a chicken-and-egg problem since it is not a trivial task to simultaneously achieve both goals. Hence, we have developed an iterative framework in which these two processes can boost each other. First, we transform the input color images to log-chromaticity color space, from which a linear relationship can be established during constructing a joint pdf of transformed left and right color images. From this joint pdf, we can estimate a linear function that relates the corresponding pixels in stereo images. Based on this linear property, we present a new stereo matching cost by combining Mutual Information (MI), SIFT descriptor, and segment-based plane-fitting to robustly find correspondence for stereo image pairs which undergo radiometric variations. Meanwhile, we devise a Stereo Color Histogram Equalization (SCHE) method to produce color-consistent stereo image pairs, which conversely boost the disparity map estimation. Experimental results show that our method produces both accurate depth maps and color-consistent stereo images, even for stereo images with severe radiometric differences.",
"title": ""
},
{
"docid": "neg:1840192_3",
"text": "Dance imagery is a consciously created mental representation of an experience, either real or imaginary, that may affect the dancer and her or his movement. In this study, imagery research in dance was reviewed in order to: 1. describe the themes and ideas that the current literature has attempted to illuminate and 2. discover the extent to which this literature fits the Revised Applied Model of Deliberate Imagery Use. A systematic search was performed, and 43 articles from 24 journals were found to fit the inclusion criteria. The articles were reviewed, analyzed, and categorized. The findings from the articles were then reported using the Revised Applied Model as a framework. Detailed descriptions of Who, What, When and Where, Why, How, and Imagery Ability were provided, along with comparisons to the field of sports imagery. Limitations within the field, such as the use of non-dance-specific and study-specific measurements, make comparisons and clear conclusions difficult to formulate. Future research can address these problems through the creation of dance-specific measurements, higher participant rates, and consistent methodologies between studies.",
"title": ""
},
{
"docid": "neg:1840192_4",
"text": "We have demonstrated a RLC matched GaN HEMT power amplifier with 12 dB gain, 0.05-2.0 GHz bandwidth, 8 W CW output power and 36.7-65.4% drain efficiency over the band. The amplifier is packaged in a ceramic S08 package and contains a GaN on SiC device operating at 28 V drain voltage, alongside GaAs integrated passive matching circuitry. A second circuit designed for 48 V operation and 15 W CW power over the same band, obtains over 20 W under pulsed condition with 10% duty cycle and 100 mus pulse width. CW measurements are pending after assembly in an alternate high power package. These amplifiers are suitable for use in wideband digital cellular infrastructure, handheld radios, and jamming applications.",
"title": ""
},
{
"docid": "neg:1840192_5",
"text": "Retailing, and particularly fashion retailing, is changing into a much more technology driven business model using omni-channel retailing approaches. Also analytical and data-driven marketing is on the rise. However, there has not been paid a lot of attention to the underlying and underpinning datastructures, the characteristics for fashion retailing, the relationship between static and dynamic data, and the governance of this. This paper is analysing and discussing the data dimension of fashion retailing with focus on data-model development, master data management and the impact of this on business development in the form of increased operational effectiveness, better adaptation the omni-channel environment and improved alignment between the business strategy and the supporting data. The paper presents a case study of a major European fashion retail and wholesale company that is in the process of reorganising its master data model and master data governance to remove silos of data, connect and utilise data across business processes, and design a global product master data database that integrates data for all existing and expected sales channels. As a major finding of this paper is fashion retailing needs more strict master data governance than general retailing as products are plenty, designed products are not necessarily marketed, and product life-cycles generally are short.",
"title": ""
},
{
"docid": "neg:1840192_6",
"text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using CMOS-compatible process flow. Following our recently reported n-TFET , a low-temperature dopant segregation technique was employed on the source side to achieve steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of >; 105. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.",
"title": ""
},
{
"docid": "neg:1840192_7",
"text": "A systematic review was conducted to evaluate whether chocolate or its constituents were capable of influencing cognitive function and/or mood. Studies investigating potentially psychoactive fractions of chocolate were also included. Eight studies (in six articles) met the inclusion criteria for assessment of chocolate or its components on mood, of which five showed either an improvement in mood state or an attenuation of negative mood. Regarding cognitive function, eight studies (in six articles) met the criteria for inclusion, of which three revealed clear evidence of cognitive enhancement (following cocoa flavanols and methylxanthine). Two studies failed to demonstrate behavioral benefits but did identify significant alterations in brain activation patterns. It is unclear whether the effects of chocolate on mood are due to the orosensory characteristics of chocolate or to the pharmacological actions of chocolate constituents. Two studies have reported acute cognitive effects of supplementation with cocoa polyphenols. Further exploration of the effect of chocolate on cognitive facilitation is recommended, along with substantiation of functional brain changes associated with the components of cocoa.",
"title": ""
},
{
"docid": "neg:1840192_8",
"text": "Given a fixed budget and an arbitrary cost for selecting each node, the budgeted influence maximization (BIM) problem concerns selecting a set of seed nodes to disseminate some information that maximizes the total number of nodes influenced (termed as influence spread) in social networks at a total cost no more than the budget. Our proposed seed selection algorithm for the BIM problem guarantees an approximation ratio of (1-1/√e). The seed selection algorithm needs to calculate the influence spread of candidate seed sets, which is known to be #P-complex. Identifying the linkage between the computation of marginal probabilities in Bayesian networks and the influence spread, we devise efficient heuristic algorithms for the latter problem. Experiments using both large-scale social networks and synthetically generated networks demonstrate superior performance of the proposed algorithm with moderate computation costs. Moreover, synthetic datasets allow us to vary the network parameters and gain important insights on the impact of graph structures on the performance of different algorithms.",
"title": ""
},
{
"docid": "neg:1840192_9",
"text": "Steganalysis is the art of detecting the message's existence and blockading the covert communication. Various steganography techniques have been proposed in literature. The Least Significant Bit (LSB) steganography is one such technique in which least significant bit of the image is replaced with data bit. As this method is vulnerable to steganalysis so as to make it more secure we encrypt the raw data before embedding it in the image. Though the encryption process increases the time complexity, but at the same time provides higher security also. This paper uses two popular techniques Rivest, Shamir, Adleman (RSA) algorithm and Diffie Hellman algorithm to encrypt the data. The result shows that the use of encryption in Steganalysis does not affect the time complexity if Diffie Hellman algorithm is used in stead of RSA algorithm.",
"title": ""
},
{
"docid": "neg:1840192_10",
"text": "We develop a profit-maximizing neoclassical model of optimal firm size and growth across different industries based on differences in industry fundamentals and firm productivity. In the model, a conglomerate discount is consistent with profit maximization. The model predicts how conglomerate firms will allocate resources across divisions over the business cycle and how their responses to industry shocks will differ from those of single-segment firms. Using plant level data, we find that growth and investment of conglomerate and single-segment firms is related to fundamental industry factors and individual segment level productivity. The majority of conglomerate firms exhibit growth across industry segments that is consistent with optimal behavior. SEVERAL RECENT ACADEMIC PAPERS and the business press claim that conglomerate firms destroy value and do a poor job of investing across business segments.1 Explanations for this underperformance share the idea that there is an imperfection either in firm governance ~agency theory! or in financial markets ~incorrect valuation of firm industry segments!. These studies implicitly assume that the conglomerates and single-industry firms possess similar ability to compete, and that they differ mainly in that conglomerates * Robert H. Smith School of Business, The University of Maryland. A previous draft was circulated and presented with the title “Optimal Firm Size and the Growth of Conglomerate and Single-industry Firms.” This research was supported by NSF Grant #SBR-9709427. We wish to thank the editor, René Stulz, two anonymous referees, John Graham, Robert Hauswald, Sheridan Titman, Anjan Thakor, Haluk Unal, and seminar participants at Carnegie Mellon, Colorado, Harvard, Illinois, Indiana, Maryland, Michigan, Ohio State, the 1999 AFA meetings, the National Bureau of Economic Research and the 1998 Finance and Accounting conference at NYU for their comments. 
We also wish to thank researchers at the Center for Economic Studies for comments and their help with the data used in this study. The research was conducted at the Center for Economic Studies, U.S. Bureau of the Census, Department of Commerce. The authors alone are responsible for the work and any errors or omissions. 1 Lang and Stulz ~1994!, Berger and Ofek ~1995!, and Comment and Jarrell ~1995! document a conglomerate discount in the stock market and low returns to conglomerate firms. Rajan, Servaes, and Zingales ~2000! and Scharfstein ~1997! examine conglomerate investment across different business segments. Lamont ~1997! and Shin and Stulz ~1997! examine the relation of investment to cash f low for conglomerate industry segments. For an example from the business press see Deutsch ~1998!. THE JOURNAL OF FINANCE • VOL. LVII, NO. 2 • APRIL 2002",
"title": ""
},
{
"docid": "neg:1840192_11",
"text": "This letter is concerned with the stability analysis of neural networks (NNs) with time-varying interval delay. The relationship between the time-varying delay and its lower and upper bounds is taken into account when estimating the upper bound of the derivative of Lyapunov functional. As a result, some improved delay/interval-dependent stability criteria for NNs with time-varying interval delay are proposed. Numerical examples are given to demonstrate the effectiveness and the merits of the proposed method.",
"title": ""
},
{
"docid": "neg:1840192_12",
"text": "This paper presents the submission of the Linguistics Department of the University of Colorado at Boulder for the 2017 CoNLL-SIGMORPHON Shared Task on Universal Morphological Reinflection. The system is implemented as an RNN Encoder-Decoder. It is specifically geared toward a low-resource setting. To this end, it employs data augmentation for counteracting overfitting and a copy symbol for processing characters unseen in the training data. The system is an ensemble of ten models combined using a weighted voting scheme. It delivers substantial improvement in accuracy compared to a non-neural baseline system in presence of varying amounts of training data.",
"title": ""
},
{
"docid": "neg:1840192_13",
"text": "This paper studies the use of received signal strength indicators (RSSI) applied to fingerprinting method in a Bluetooth network for indoor positioning. A Bayesian fusion (BF) method is proposed to combine the statistical information from the RSSI measurements and the prior information from a motion model. Indoor field tests are carried out to verify the effectiveness of the method. Test results show that the proposed BF algorithm achieves a horizontal positioning accuracy of about 4.7 m on the average, which is about 6 and 7 % improvement when compared with Bayesian static estimation and a point Kalman filter method, respectively.",
"title": ""
},
{
"docid": "neg:1840192_14",
"text": "Machine translation is highly sensitive to the size and quality of the training data, which has led to an increasing interest in collecting and filtering large parallel corpora. In this paper, we propose a new method for this task based on multilingual sentence embeddings. Our approach uses an encoder-decoder trained over an initial parallel corpus to build multilingual sentence representations, which are then incorporated into a new margin-based method to score, mine and filter parallel sentences. In contrast to previous approaches, which rely on nearest neighbor retrieval with a hard threshold over cosine similarity, our proposed method accounts for the scale inconsistencies of this measure, considering the margin between a given sentence pair and its closest candidates instead. Our experiments show large improvements over existing methods. We outperform the best published results on the BUCC shared task on parallel corpus mining by more than 10 F1 points. We also improve the precision from 48.9 to 83.3 on the reconstruction of 11.3M English-French sentence pairs of the UN corpus. Finally, filtering the English-German ParaCrawl corpus with our approach, we obtain 31.2 BLEU points on newstest2014, an improvement of more than one point over the best official filtered version.",
"title": ""
},
{
"docid": "neg:1840192_15",
"text": "The improvement of many applications such as web search, latency reduction, and personalization/ recommendation systems depends on surfing prediction. Predicting user surfing paths involves tradeoffs between model complexity and predictive accuracy. In this paper, we combine two classification techniques, namely, the Markov model and Support Vector Machines (SVM), to resolve prediction using Dempster’s rule. Such fusion overcomes the inability of the Markov model in predicting the unseen data as well as overcoming the problem of multiclassification in the case of SVM, especially when dealing with large number of classes. We apply feature extraction to increase the power of discrimination of SVM. In addition, during prediction we employ domain knowledge to reduce the number of classifiers for the improvement of accuracy and the reduction of prediction time. We demonstrate the effectiveness of our hybrid approach by comparing our results with widely used techniques, namely, SVM, the Markov model, and association rule mining.",
"title": ""
},
{
"docid": "neg:1840192_16",
"text": "This paper presents a new approach for recognition of 3D objects that are represented as 3D point clouds. We introduce a new 3D shape descriptor called Intrinsic Shape Signature (ISS) to characterize a local/semi-local region of a point cloud. An intrinsic shape signature uses a view-independent representation of the 3D shape to match shape patches from different views directly, and a view-dependent transform encoding the viewing geometry to facilitate fast pose estimation. In addition, we present a highly efficient indexing scheme for the high dimensional ISS shape descriptors, allowing for fast and accurate search of large model databases. We evaluate the performance of the proposed algorithm on a very challenging task of recognizing different vehicle types using a database of 72 models in the presence of sensor noise, obscuration and scene clutter.",
"title": ""
},
{
"docid": "neg:1840192_17",
"text": "This article describes a new method for assessing the effect of a given film on viewers’ brain activity. Brain activity was measured using functional magnetic resonance imaging (fMRI) during free viewing of films, and inter-subject correlation analysis (ISC) was used to assess similarities in the spatiotemporal responses across viewers’ brains during movie watching. Our results demonstrate that some films can exert considerable control over brain activity and eye movements. However, this was not the case for all types of motion picture sequences, and the level of control over viewers’ brain activity differed as a function of movie content, editing, and directing style. We propose that ISC may be useful to film studies by providing a quantitative neuroscientific assessment of the impact of different styles of filmmaking on viewers’ brains, and a valuable method for the film industry to better assess its products. Finally, we suggest that this method brings together two separate and largely unrelated disciplines, cognitive neuroscience and film studies, and may open the way for a new interdisciplinary field of “neurocinematic” studies.",
"title": ""
},
{
"docid": "neg:1840192_18",
"text": "The disruptive power of blockchain technologies represents a great opportunity to re-imagine standard practices of providing radio access services by addressing critical areas such as deployment models that can benefit from brand new approaches. As a starting point for this debate, we look at the current limits of infrastructure sharing, and specifically at the Small-Cell-as-a-Service trend, asking ourselves how we could push it to its natural extreme: a scenario in which any individual home or business user can become a service provider for mobile network operators (MNOs), freed from all the scalability and legal constraints that are inherent to the current modus operandi. We propose the adoption of smart contracts to implement simple but effective Service Level Agreements (SLAs) between small cell providers and MNOs, and present an example contract template based on the Ethereum blockchain.",
"title": ""
},
{
"docid": "neg:1840192_19",
"text": "Data access control is an effective way to ensure the data security in the cloud. Due to data outsourcing and untrusted cloud servers, the data access control becomes a challenging issue in cloud storage systems. Ciphertext-Policy Attribute-based Encryption (CP-ABE) is regarded as one of the most suitable technologies for data access control in cloud storage, because it gives data owners more direct control on access policies. However, it is difficult to directly apply existing CP-ABE schemes to data access control for cloud storage systems because of the attribute revocation problem. In this paper, we design an expressive, efficient and revocable data access control scheme for multi-authority cloud storage systems, where there are multiple authorities co-exist and each authority is able to issue attributes independently. Specifically, we propose a revocable multi-authority CP-ABE scheme, and apply it as the underlying techniques to design the data access control scheme. Our attribute revocation method can efficiently achieve both forward security and backward security. The analysis and simulation results show that our proposed data access control scheme is secure in the random oracle model and is more efficient than previous works.",
"title": ""
}
] |
1840193 | Recent Developments in Indian Sign Language Recognition : An Analysis | [
{
"docid": "pos:1840193_0",
"text": "We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results.",
"title": ""
}
] | [
{
"docid": "neg:1840193_0",
"text": "We present a new deep learning approach to pose-guided resynthesis of human photographs. At the heart of the new approach is the estimation of the complete body surface texture based on a single photograph. Since the input photograph always observes only a part of the surface, we suggest a new inpainting method that completes the texture of the human body. Rather than working directly with colors of texture elements, the inpainting network estimates an appropriate source location in the input image for each element of the body surface. This correspondence field between the input image and the texture is then further warped into the target image coordinate frame based on the desired pose, effectively establishing the correspondence between the source and the target view even when the pose change is drastic. The final convolutional network then uses the established correspondence and all other available information to synthesize the output image using a fully-convolutional architecture with deformable convolutions. We show stateof-the-art result for pose-guided image synthesis. Additionally, we demonstrate the performance of our system for garment transfer and pose-guided face resynthesis.",
"title": ""
},
{
"docid": "neg:1840193_1",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "neg:1840193_2",
"text": "We introduce a novel evolutionary algorithm (EA) with a semantic network-based representation. For enabling this, we establish new formulations of EA variation operators, crossover and mutation, that we adapt to work on semantic networks. The algorithm employs commonsense reasoning to ensure all operations preserve the meaningfulness of the networks, using ConceptNet and WordNet knowledge bases. The algorithm can be classified as a novel memetic algorithm (MA), given that (1) individuals represent pieces of information that undergo evolution, as in the original sense of memetics as it was introduced by Dawkins; and (2) this is different from existing MA, where the word “memetic” has been used as a synonym for local refinement after global optimization. For evaluating the approach, we introduce an analogical similarity-based fitness measure that is computed through structure mapping. This setup enables the open-ended generation of networks analogous to a given base network.",
"title": ""
},
{
"docid": "neg:1840193_3",
"text": "Due to the increase of generation sources in distribution networks, it is becoming very complex to develop and maintain models of these networks. Network operators need to determine reduced models of distribution networks to be used in grid management functions. This paper presents a novel method that synthesizes steady-state models of unbalanced active distribution networks with the use of dynamic measurements (time series) from phasor measurement units (PMUs). Since phasor measurement unit (PMU) measurements may contain errors and bad data, this paper presents the application of a Kalman filter technique for real-time data processing. In addition, PMU data capture the power system's response at different time-scales, which are generated by different types of power system events; the presented Kalman filter has been improved to extract the steady-state component of the PMU measurements to be fed to the steady-state model synthesis application. Performance of the proposed methods has been assessed by real-time hardware-in-the-loop simulations on a sample distribution network.",
"title": ""
},
{
"docid": "neg:1840193_4",
"text": "This contribution introduces a novel approach to cross-calibrate automotive vision and ranging sensors. The resulting sensor alignment allows the incorporation of multiple sensor data into a detection and tracking framework. Exemplarily, we show how a realtime vehicle detection system, intended for emergency breaking or ACC applications, benefits from the low level fusion of multibeam lidar and vision sensor measurements in discrimination performance and computational complexity",
"title": ""
},
{
"docid": "neg:1840193_5",
"text": "The evolution of Cloud computing makes the major changes in computing world as with the assistance of basic cloud computing service models like SaaS, PaaS, and IaaS an organization achieves their business goal with minimum effort as compared to traditional computing environment. On the other hand security of the data in the cloud database server is the key area of concern in the acceptance of cloud. It requires a very high degree of privacy and authentication. To protect the data in cloud database server cryptography is one of the important methods. Cryptography provides various symmetric and asymmetric algorithms to secure the data. This paper presents the symmetric cryptographic algorithm named as AES (Advanced Encryption Standard). It is based on several substitutions, permutation and transformation.",
"title": ""
},
{
"docid": "neg:1840193_6",
"text": "Cyber-attacks against Smart Grids have been found in the real world. Malware such as Havex and BlackEnergy have been found targeting industrial control systems (ICS) and researchers have shown that cyber-attacks can exploit vulnerabilities in widely used Smart Grid communication standards. This paper addresses a deep investigation of attacks against the manufacturing message specification of IEC 61850, which is expected to become one of the most widely used communication services in Smart Grids. We investigate how an attacker can build a custom tool to execute man-in-the-middle attacks, manipulate data, and affect the physical system. Attack capabilities are demonstrated based on NESCOR scenarios to make it possible to thoroughly test these scenarios in a real system. The goal is to help understand the potential for such attacks, and to aid the development and testing of cyber security solutions. An attack use-case is presented that focuses on the standard for power utility automation, IEC 61850 in the context of inverter-based distributed energy resource devices; especially photovoltaics (PV) generators.",
"title": ""
},
{
"docid": "neg:1840193_7",
"text": "Galleries, Libraries, Archives and Museums (short: GLAMs) around the globe are beginning to explore the potential of crowdsourcing, i. e. outsourcing specific activities to a community though an open call. In this paper, we propose a typology of these activities, based on an empirical study of a substantial amount of projects initiated by relevant cultural heritage institutions. We use the Digital Content Life Cycle model to study the relation between the different types of crowdsourcing and the core activities of heritage organizations. Finally, we focus on two critical challenges that will define the success of these collaborations between amateurs and professionals: (1) finding sufficient knowledgeable, and loyal users; (2) maintaining a reasonable level of quality. We thus show the path towards a more open, connected and smart cultural heritage: open (the data is open, shared and accessible), connected (the use of linked data allows for interoperable infrastructures, with users and providers getting more and more connected), and smart (the use of knowledge and web technologies allows us to provide interesting data to the right users, in the right context, anytime, anywhere -- both with involved users/consumers and providers). It leads to a future cultural heritage that is open, has intelligent infrastructures and has involved users, consumers and providers.",
"title": ""
},
{
"docid": "neg:1840193_8",
"text": "Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the“over-clustering” of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.",
"title": ""
},
{
"docid": "neg:1840193_9",
"text": "Cloud computing is a new way of delivering computing resources, not a new technology. Computing services ranging from data storage and processing to software, such as email handling, are now available instantly, commitment-free and on-demand. Since we are in a time of belt-tightening, this new economic model for computing has found fertile ground and is seeing massive global investment. According to IDC’s analysis, the worldwide forecast for cloud services in 2009 will be in the order of $17.4bn. The estimation for 2013 amounts to $44.2bn, with the European market ranging from €971m in 2008 to €6,005m in 2013 .",
"title": ""
},
{
"docid": "neg:1840193_10",
"text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors’ sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors’ sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.",
"title": ""
},
{
"docid": "neg:1840193_11",
"text": "A circular microstrip array with beam focused for RFID applications was presented. An analogy with the optical lens and optical diffraction was made to describe the behaviour of this system. The circular configuration of the array requires less phase shift and exhibite smaller side lobe level compare to a square array. The measurement result shows a good agreement with simulation and theory. This system is a good way to increase the efficiency of RFID communication without useless power. This solution could also be used to develop RFID devices for the localization problematic. The next step of this work is to design a system with an adjustable focus length.",
"title": ""
},
{
"docid": "neg:1840193_12",
"text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.",
"title": ""
},
{
"docid": "neg:1840193_13",
"text": "An automatic classification system of the music genres is proposed. Based on the timbre features such as mel-frequency cepstral coefficients, the spectro-temporal features are obtained to capture the temporal evolution and variation of the spectral characteristics of the music signal. Mean, variance, minimum, and maximum values of the timbre features are calculated. Modulation spectral flatness, crest, contrast, and valley are estimated for both original spectra and timbre-feature vectors. A support vector machine (SVM) is used as a classifier where an elaborated kernel function is defined. To reduce the computational complexity, an SVM ranker is applied for feature selection. Compared with the best algorithms submitted to the music information retrieval evaluation exchange (MIREX) contests, the proposed method provides higher accuracy at a lower feature dimension for the GTZAN and ISMIR2004 databases.",
"title": ""
},
{
"docid": "neg:1840193_14",
"text": "Chinese-English parallel corpora are key resources for Chinese-English cross-language information processing, Chinese-English bilingual lexicography, Chinese-English language research and teaching. But so far large-scale Chinese-English corpus is still unavailable yet, given the difficulties and the intensive labours required. In this paper, our work towards building a large-scale Chinese-English parallel corpus is presented. We elaborate on the collection, annotation and mark-up of the parallel Chinese-English texts and the workflow that we used to construct the corpus. In addition, we also present our work toward building tools for constructing and using the corpus easily for different purposes. Among these tools, a parallel concordance tool developed by us is examined in detail. Several applications of the corpus being conducted are also introduced briefly in the paper.",
"title": ""
},
{
"docid": "neg:1840193_15",
"text": "The description of a software architecture style must include the structural model of the components and their interactions, the laws governing the dynamic changes in the architecture, and the communication pattern. In our work we represent a system as a graph where hyperedges are components and nodes are ports of communication. The construction and dynamic evolut,ion of the style will be represented as context-free productions and graph rewriting. To model the evolution of the system we propose to use techniques of constraint solving. From this approach we obtain an intuitive way to model systems with nice characteristics for the description of dynamic architectures and reconfiguration and, a unique language to describe the style, model the evolution of the system and prove properties.",
"title": ""
},
{
"docid": "neg:1840193_16",
"text": "We report a novel paper-based biobattery which generates power from microorganism-containing liquid derived from renewable and sustainable wastewater which is readily accessible in the local environment. The device fuses the art of origami and the technology of microbial fuel cells (MFCs) and has the potential to shift the paradigm for flexible and stackable paper-based batteries by enabling exceptional electrical characteristics and functionalities. 3D, modular, and retractable battery stack is created from (i) 2D paper sheets through high degrees of folding and (ii) multifunctional layers sandwiched for MFC device configuration. The stack is based on ninja star-shaped origami design formed by eight MFC modular blades, which is retractable from sharp shuriken (closed) to round frisbee (opened). The microorganism-containing wastewater is added into an inlet of the closed battery stack and it is transported into each MFC module through patterned fluidic pathways in the paper layers. During operation, the battery stack is transformed into the round frisbee to connect eight MFC modules in series for improving the power output and simultaneously expose all air-cathodes to the air for their cathodic reactions. The device generates desired values of electrical current and potential for powering an LED for more than 20min.",
"title": ""
},
{
"docid": "neg:1840193_17",
"text": "The clinical, radiologic, and pathologic findings in radiation injury of the brain are reviewed. Late radiation injury is the major, dose-limiting complication of brain irradiation and occurs in two forms, focal and diffuse, which differ significantly in clinical and radiologic features. Focal and diffuse injuries both include a wide spectrum of abnormalities, from subclinical changes detectable only by MR imaging to overt brain necrosis. Asymptomatic focal edema is commonly seen on CT and MR following focal or large-volume irradiation. Focal necrosis has the CT and MR characteristics of a mass lesion, with clinical evidence of focal neurologic abnormality and raised intracranial pressure. Microscopically, the lesion shows characteristic vascular changes and white matter pathology ranging from demyelination to coagulative necrosis. Diffuse radiation injury is characterized by periventricular decrease in attenuation of CT and increased signal on proton-density and T2-weighted MR images. Most patients are asymptomatic. When clinical manifestations occur, impairment of mental function is the most prominent feature. Pathologic findings in focal and diffuse radiation necrosis are similar. Necrotizing leukoencephalopathy is the form of diffuse white matter injury that follows chemotherapy, with or without irradiation. Vascular disease is less prominent and the latent period is shorter than in diffuse radiation injury; radiologic findings and clinical manifestations are similar. Late radiation injury of large arteries is an occasional cause of postradiation cerebral injury, and cerebral atrophy and mineralizing microangiopathy are common radiologic findings of uncertain clinical significance. Functional imaging by positron emission tomography can differentiate recurrent tumor from focal radiation necrosis with positive and negative predictive values for tumor of 80-90%. 
Positron emission tomography of the blood-brain barrier, glucose metabolism, and blood flow, together with MR imaging, have demonstrated some of the pathophsiology of late radiation necrosis. Focal glucose hypometabolism on positron emissin tomography in irradiated patients may have prognostic significance for subsequent development of clinically evident radiation necrosis.",
"title": ""
},
{
"docid": "neg:1840193_18",
"text": "BACKGROUND\nPovidone-iodine solution is an antiseptic that is used worldwide as surgical paint and is considered to have a low irritant potential. Post-surgical severe irritant dermatitis has been described after the misuse of this antiseptic in the surgical setting.\n\n\nMETHODS\nBetween January 2011 and June 2013, 27 consecutive patients with post-surgical contact dermatitis localized outside of the surgical incision area were evaluated. Thirteen patients were also available for patch testing.\n\n\nRESULTS\nAll patients developed dermatitis the day after the surgical procedure. Povidone-iodine solution was the only liquid in contact with the skin of our patients. Most typical lesions were distributed in a double lumbar parallel pattern, but they were also found in a random pattern or in areas where a protective pad or an occlusive medical device was glued to the skin. The patch test results with povidone-iodine were negative.\n\n\nCONCLUSIONS\nPovidone-iodine-induced post-surgical dermatitis may be a severe complication after prolonged surgical procedures. As stated in the literature and based on the observation that povidone-iodine-induced contact irritant dermatitis occurred in areas of pooling or occlusion, we speculate that povidone-iodine together with occlusion were the causes of the dermatitis epidemic that occurred in our surgical setting. Povidone-iodine dermatitis is a problem that is easily preventable through the implementation of minimal routine changes to adequately dry the solution in contact with the skin.",
"title": ""
}
] |
1840194 | Haptics: Perception, Devices, Control, and Applications | [
{
"docid": "pos:1840194_0",
"text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field on the crossroads of ICT and psychology is still embryonic and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better researchmethodologies, developing basic social touch building blocks, and solving specific ICT challenges.",
"title": ""
}
] | [
{
"docid": "neg:1840194_0",
"text": "BACKGROUND\nCaffeine and sodium bicarbonate ingestion have been suggested to improve high-intensity intermittent exercise, but it is unclear if these ergogenic substances affect performance under provoked metabolic acidification. To study the effects of caffeine and sodium bicarbonate on intense intermittent exercise performance and metabolic markers under exercise-induced acidification, intense arm-cranking exercise was performed prior to intense intermittent running after intake of placebo, caffeine and sodium bicarbonate.\n\n\nMETHODS\nMale team-sports athletes (n = 12) ingested sodium bicarbonate (NaHCO3; 0.4 g.kg(-1) b.w.), caffeine (CAF; 6 mg.kg(-1) b.w.) or placebo (PLA) on three different occasions. Thereafter, participants engaged in intense arm exercise prior to the Yo-Yo intermittent recovery test level-2 (Yo-Yo IR2). Heart rate, blood lactate and glucose as well as rating of perceived exertion (RPE) were determined during the protocol.\n\n\nRESULTS\nCAF and NaHCO3 elicited a 14 and 23% improvement (P < 0.05), respectively, in Yo-Yo IR2 performance, post arm exercise compared to PLA. The NaHCO3 trial displayed higher [blood lactate] (P < 0.05) compared to CAF and PLA (10.5 ± 1.9 vs. 8.8 ± 1.7 and 7.7 ± 2.0 mmol.L(-1), respectively) after the Yo-Yo IR2. At exhaustion CAF demonstrated higher (P < 0.05) [blood glucose] compared to PLA and NaHCO3 (5.5 ± 0.7 vs. 4.2 ± 0.9 vs. 4.1 ± 0.9 mmol.L(-1), respectively). RPE was lower (P < 0.05) during the Yo-Yo IR2 test in the NaHCO3 trial in comparison to CAF and PLA, while no difference in heart rate was observed between trials.\n\n\nCONCLUSIONS\nCaffeine and sodium bicarbonate administration improved Yo-Yo IR2 performance and lowered perceived exertion after intense arm cranking exercise, with greater overall effects of sodium bicarbonate intake.",
"title": ""
},
{
"docid": "neg:1840194_1",
"text": "A first proof-of-concept mm-sized implant based on ultrasonic power transfer and RF uplink data transmission is presented. The prototype consists of a 1 mm × 1 mm piezoelectric receiver, a 1 mm × 2 mm chip designed in 65 nm CMOS and a 2.5 mm × 2.5 mm off-chip antenna, and operates through 3 cm of chicken meat which emulates human tissue. The implant supports a DC load power of 100 μW allowing for high-power applications. It also transmits consecutive UWB pulse sequences activated by the ultrasonic downlink data path, demonstrating sufficient power for an Mary PPM transmitter in uplink.",
"title": ""
},
{
"docid": "neg:1840194_2",
"text": "Gamification, applying game mechanics to nongame contexts, has recently become a hot topic across a wide range of industries, and has been presented as a potential disruptive force in education. It is based on the premise that it can promote motivation and engagement and thus contribute to the learning process. However, research examining this assumption is scarce. In a set of studies we examined the effects of points, a basic element of gamification, on performance in a computerized assessment of mastery and fluency of basic mathematics concepts. The first study, with adult participants, found no effect of the point manipulation on accuracy of responses, although the speed of responses increased. In a second study, with 6e8 grade middle school participants, we found the same results for the two aspects of performance. In addition, middle school participants' reactions to the test revealed higher likeability ratings for the test under the points condition, but only in the first of the two sessions, and perceived effort during the test was higher in the points condition, but only for eighth grade students. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840194_3",
"text": "Network security analysis and ensemble data visualization are two active research areas. Although they are treated as separate domains, they share many common challenges and characteristics. Both focus on scalability, time-dependent data analytics, and exploration of patterns and unusual behaviors in large datasets. These overlaps provide an opportunity to apply ensemble visualization research to improve network security analysis. To study this goal, we propose methods to interpret network security alerts and flow traffic as ensemble members. We can then apply ensemble visualization techniques in a network analysis environment to produce a network ensemble visualization system. Including ensemble representations provide new, in-depth insights into relationships between alerts and flow traffic. Analysts can cluster traffic with similar behavior and identify traffic with unusual patterns, something that is difficult to achieve with high-level overviews of large network datasets. Furthermore, our ensemble approach facilitates analysis of relationships between alerts and flow traffic, improves scalability, maintains accessibility and configurability, and is designed to fit our analysts' working environment, mental models, and problem solving strategies.",
"title": ""
},
{
"docid": "neg:1840194_4",
"text": "Utilizing Big Data scenarios that are generated from increasing digitization and data availability is a core topic in IS research. There are prospective advantages in generating business value from those scenarios through improved decision support and new business models. In order to harvest those potential advantages Big Data capabilities are required, including not only technological aspects of data management and analysis but also strategic and organisational aspects. To assess these capabilities, one can use capability assessment models. Employing a qualitative meta-analysis on existing capability assessment models, it can be revealed that the existing approaches greatly differ in their fundamental structure due to heterogeneous model elements. The heterogeneous elements are therefore synthesized and transformed into consistent assessment dimensions to fulfil the requirements of exhaustive and mutually exclusive aspects of a capability assessment model. As part of a broader research project to develop a consistent and harmonized Big Data Capability Assessment Model (BDCAM) a new design for a capability matrix is proposed including not only capability dimensions but also Big Data life cycle tasks in order to measure specific weaknesses along the process of data-driven value creation.",
"title": ""
},
{
"docid": "neg:1840194_5",
"text": "Today’s networks are filled with a massive and ever-growing variety of network functions that coupled with proprietary devices, which leads to network ossification and difficulty in network management and service provision. Network Function Virtualization (NFV) is a promising paradigm to change such situation by decoupling network functions from the underlying dedicated hardware and realizing them in the form of software, which are referred to as Virtual Network Functions (VNFs). Such decoupling introduces many benefits which include reduction of Capital Expenditure (CAPEX) and Operation Expense (OPEX), improved flexibility of service provision, etc. In this paper, we intend to present a comprehensive survey on NFV, which starts from the introduction of NFV motivations. Then, we explain the main concepts of NFV in terms of terminology, standardization and history, and how NFV differs from traditional middlebox based network. After that, the standard NFV architecture is introduced using a bottom up approach, based on which the corresponding use cases and solutions are also illustrated. In addition, due to the decoupling of network functionalities and hardware, people’s attention is gradually shifted to the VNFs. Next, we provide an extensive and in-depth discussion on state-of-the-art VNF algorithms including VNF placement, scheduling, migration, chaining and multicast. Finally, to accelerate the NFV deployment and avoid pitfalls as far as possible, we survey the challenges faced by NFV and the trend for future directions. In particular, the challenges are discussed from bottom up, which include hardware design, VNF deployment, VNF life cycle control, service chaining, performance evaluation, policy enforcement, energy efficiency, reliability and security, and the future directions are discussed around the current trend towards network softwarization. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840194_6",
"text": "Activists have used social media during modern civil uprisings, and researchers have found that the generated content is predictive of off-line protest activity. However, questions remain regarding the drivers of this predictive power. In this paper, we begin by deriving predictor variables for individuals’ protest decisions from the literature on protest participation theory. We then test these variables on the case of Twitter and the 2011 Egyptian revolution. We find significant positive correlations between the volume of future-protest descriptions on Twitter and protest onsets. We do not find significant correlations between such onsets and the preceding volume of political expressions, individuals’ access to news, and connections with political activists. These results locate the predictive power of social media in its function as a protest advertisement and organization mechanism. We then build predictive models using future-protest descriptions and compare these models with baselines informed by daily event counts from the Global Database of Events, Location, and Tone (GDELT). Inspection of the significant variables in our GDELT models reveals that an increased military presence may be predictive of protest onsets in major cities. In sum, this paper highlights the ways in which online activism shapes off-line behavior during civil uprisings.",
"title": ""
},
{
"docid": "neg:1840194_7",
"text": "Average public feedback scores given to sellers have increased strongly over time in an online labor market. Changes in marketplace composition or improved seller performance cannot fully explain this trend. We propose that two factors inflated reputations: (1) it costs more to give bad feedback than good feedback and (2) this cost to raters is increasing in the cost to sellers from bad feedback. Together, (1) and (2) can lead to an equilibrium where feedback is always positive, regardless of performance. In response, the marketplace encouraged buyers to additionally give private feedback. This private feedback was substantially more candid and more predictive of future worker performance. When aggregates of private feedback about each job applicant were experimentally provided to employers as a private feedback score, employers used these scores when making screening and hiring decisions.",
"title": ""
},
{
"docid": "neg:1840194_8",
"text": "The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow. Inspired by recent advances in deep learning, we propose a framework for reconstructing MR images from undersampled data using a deep cascade of convolutional neural networks to accelerate the data acquisition process. We show that for Cartesian undersampling of 2D cardiac MR images, the proposed method outperforms the state-of-the-art compressed sensing approaches, such as dictionary learning-based MRI (DLMRI) reconstruction, in terms of reconstruction error, perceptual quality and reconstruction speed for both 3-fold and 6-fold undersampling. Compared to DLMRI, the error produced by the method proposed is approximately twice as small, allowing to preserve anatomical structures more faithfully. Using our method, each image can be reconstructed in 23ms, which is fast enough to enable real-time applications.",
"title": ""
},
{
"docid": "neg:1840194_9",
"text": "Most of current concepts for a visual prosthesis are based on neuronal electrical stimulation at different locations along the visual pathways within the central nervous system. The different designs of visual prostheses are named according to their locations (i.e., cortical, optic nerve, subretinal, and epiretinal). Visual loss caused by outer retinal degeneration in diseases such as retinitis pigmentosa or age-related macular degeneration can be reversed by electrical stimulation of the retina or the optic nerve (retinal or optic nerve prostheses, respectively). On the other hand, visual loss caused by inner or whole thickness retinal diseases, eye loss, optic nerve diseases (tumors, ischemia, inflammatory processes etc.), or diseases of the central nervous system (not including diseases of the primary and secondary visual cortices) can be reversed by a cortical visual prosthesis. The intent of this article is to provide an overview of current and future concepts of retinal and optic nerve prostheses. This article will begin with general considerations that are related to all or most of visual prostheses and then concentrate on the retinal and optic nerve designs. The authors believe that the field has grown beyond the scope of a single article so cortical prostheses will be described only because of their direct effect on the concept and technical development of the other prostheses, and this will be done in a more general and historic perspective.",
"title": ""
},
{
"docid": "neg:1840194_10",
"text": "Unconventional machining processes (communally named advanced or modern machining processes) are widely used by manufacturing industries. These advanced machining processes allow producing complex profiles and high quality-products. However, several process parameters should be optimized to achieve this end. In this paper, the optimization of process parameters of two conventional and four advanced machining processes is investigated: drilling process, grinding process, abrasive jet machining (AJM), abrasive water jet machining (AWJM), ultrasonic machining (USM), and water jet machining (WJM), respectively. This research employed two bio-inspired algorithms called the cuckoo optimization algorithm (COA) and the hoopoe heuristic (HH) to optimize the machining control parameters of these processes. The obtained results are compared with other optimization algorithms described and applied in the literature.",
"title": ""
},
{
"docid": "neg:1840194_11",
"text": "Many state-of-the-art segmentation algorithms rely on Markov or Conditional Random Field models designed to enforce spatial and global consistency constraints. This is often accomplished by introducing additional latent variables to the model, which can greatly increase its complexity. As a result, estimating the model parameters or computing the best maximum a posteriori (MAP) assignment becomes a computationally expensive task. In a series of experiments on the PASCAL and the MSRC datasets, we were unable to find evidence of a significant performance increase attributed to the introduction of such constraints. On the contrary, we found that similar levels of performance can be achieved using a much simpler design that essentially ignores these constraints. This more simple approach makes use of the same local and global features to leverage evidence from the image, but instead directly biases the preferences of individual pixels. While our investigation does not prove that spatial and consistency constraints are not useful in principle, it points to the conclusion that they should be validated in a larger context.",
"title": ""
},
{
"docid": "neg:1840194_12",
"text": "Humans can easily recognize handwritten words, after gaining basic knowledge of languages. This knowledge needs to be transferred to computers for automatic character recognition. The work proposed in this paper tries to automate recognition of handwritten hindi isolated characters using multiple classifiers. For feature extraction, it uses histogram of oriented gradients as one feature and profile projection histogram as another feature. The performance of various classifiers has been evaluated using theses features experimentally and quadratic SVM has been found to produce better results.",
"title": ""
},
{
"docid": "neg:1840194_13",
"text": "Modeling how visual saliency guides the deployment of atten tion over visual scenes has attracted much interest recently — among both computer v ision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in compute r vision have mostly been focused on modeling bottom-up saliency. Strong influences o n attention and eye movements, however, come from instantaneous task demands. Here , w propose models of top-down visual guidance considering task influences. The n ew models estimate the state of a human subject performing a task (here, playing video gam es), and map that state to an eye position. Factors influencing state come from scene gi st, physical actions, events, and bottom-up saliency. Proposed models fall into two categ ori s. In the first category, we use classical discriminative classifiers, including Reg ression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperfor m 15 competing bottom-up and top-down attention models in predicting future eye fixat ions on 18,000 and 75,00 video frames and eye movement samples from a driving and a flig ht combat video game, respectively. We further test and validate our approaches o n 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction s c re that reference models.",
"title": ""
},
{
"docid": "neg:1840194_14",
"text": "The kind of causal inference seen in natural human thought can be \"algorithmitized\" to help produce human-level machine intelligence.",
"title": ""
},
{
"docid": "neg:1840194_15",
"text": "dent processes and domain-specific knowledge. Until recently, information extraction has leaned heavily on domain knowledge, which requires either manual engineering or manual tagging of examples (Miller et al. 1998; Soderland 1999; Culotta, McCallum, and Betz 2006). Semisupervised approaches (Riloff and Jones 1999, Agichtein and Gravano 2000, Rosenfeld and Feldman 2007) require only a small amount of hand-annotated training, but require this for every relation of interest. This still presents a knowledge engineering bottleneck, when one considers the unbounded number of relations in a diverse corpus such as the web. Shinyama and Sekine (2006) explored unsupervised relation discovery using a clustering algorithm with good precision, but limited scalability. The KnowItAll research group is a pioneer of a new paradigm, Open IE (Banko et al. 2007, Banko and Etzioni 2008), that operates in a totally domain-independent manner and at web scale. An Open IE system makes a single pass over its corpus and extracts a diverse set of relational tuples without requiring any relation-specific human input. Open IE is ideally suited to corpora such as the web, where the target relations are not known in advance and their number is massive. Articles",
"title": ""
},
{
"docid": "neg:1840194_16",
"text": "We extend photometric stereo to make it work with internet images, which are typically associated with different viewpoints and significant noise. For popular tourism sites, thousands of images can be obtained from internet search engines. With these images, our method computes the global illumination for each image and the surface orientation at some scene points. The illumination information can then be used to estimate the weather conditions (such as sunny or cloudy) for each image, since there is a strong correlation between weather and scene illumination. We demonstrate our method on several challenging examples.",
"title": ""
},
{
"docid": "neg:1840194_17",
"text": "URB-754 (6-methyl-2-[(4-methylphenyl)amino]-1-benzoxazin-4-one) was identified as a new type of designer drug in illegal products. Though many of the synthetic cannabinoids detected in illegal products are known to have affinities for cannabinoid CB1/CB2 receptors, URB-754 was reported to inhibit an endocannabinoid deactivating enzyme. Furthermore, an unknown compound (N,5-dimethyl-N-(1-oxo-1-(p-tolyl)butan-2-yl)-2-(N'-(p-tolyl)ureido)benzamide), which is deduced to be the product of a reaction between URB-754 and a cathinone derivative 4-methylbuphedrone (4-Me-MABP), was identified along with URB-754 and 4-Me-MABP in the same product. It is of interest that the product of a reaction between two different types of designer drugs, namely, a cannabinoid-related designer drug and a cathinone-type designer drug, was found in one illegal product. In addition, 12 cannabimimetic compounds, 5-fluoropentyl-3-pyridinoylindole, JWH-307, JWH-030, UR-144, 5FUR-144 (synonym: XLR11), (4-methylnaphtyl)-JWH-022 [synonym: N-(5-fluoropentyl)-JWH-122], AM-2232, (4-methylnaphtyl)-AM-2201 (MAM-2201), N-(4-pentenyl)-JWH-122, JWH-213, (4-ethylnaphtyl)-AM-2201 (EAM-2201) and AB-001, were also detected herein as newly distributed designer drugs in Japan. Furthermore, a tryptamine derivative, 4-hydroxy-diethyltryptamine (4-OH-DET), was detected together with a synthetic cannabinoid, APINACA, in the same product.",
"title": ""
},
{
"docid": "neg:1840194_18",
"text": "Dry electrodes for impedimetric sensing of physiological parameters (such as ECG, EEG, and GSR) promise the ability for long duration monitoring. This paper describes the feasibility of a novel dry electrode interfacing using Patterned Vertical Carbon Nanotube (pvCNT) for physiological parameter sensing. The electrodes were fabricated on circular discs (φ = 10 mm) stainless steel substrate. Multiwalled electrically conductive carbon nanotubes were grown in pattered pillar formation of 100 μm squared with 50, 100, 200 and 500μm spacing. The heights of the pillars were between 1 to 1.5 mm. A comparative test with commercial ECG electrodes shows that pvCNT has lower electrical impedance, stable impedance over very long time, and comparable signal capture in vitro. Long duration study shows minimal degradation of impedance over 2 days period. The results demonstrate the feasibility of using pvCNT dry electrodes for physiological parameter sensing.",
"title": ""
},
{
"docid": "neg:1840194_19",
"text": "Development of controlled release transdermal dosage form is a complex process involving extensive research. Transdermal patches have been developed to improve clinical efficacy of the drug and to enhance patient compliance by delivering smaller amount of drug at a predetermined rate. This makes evaluation studies even more important in order to ensure their desired performance and reproducibility under the specified environmental conditions. These studies are predictive of transdermal dosage forms and can be classified into following types:",
"title": ""
}
] |
1840195 | Validating the independent components of neuroimaging time series via clustering and visualization | [
{
"docid": "pos:1840195_0",
"text": "Cluster structure of gene expression data obtained from DNA microarrays is analyzed and visualized with the Self-Organizing Map (SOM) algorithm. The SOM forms a non-linear mapping of the data to a two-dimensional map grid that can be used as an exploratory data analysis tool for generating hypotheses on the relationships, and ultimately of the function of the genes. Similarity relationships within the data and cluster structures can be visualized and interpreted. The methods are demonstrated by computing a SOM of yeast genes. The relationships of known functional classes of genes are investigated by analyzing their distribution on the SOM, the cluster structure is visualized by the U-matrix method, and the clusters are characterized in terms of the properties of the expression profiles of the genes. Finally, it is shown that the SOM visualizes the similarity of genes in a more trustworthy way than two alternative methods, multidimensional scaling and hierarchical clustering.",
"title": ""
}
] | [
{
"docid": "neg:1840195_0",
"text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.",
"title": ""
},
{
"docid": "neg:1840195_1",
"text": "Adverse childhood experiences (ACEs) have been linked with risky health behaviors and the development of chronic diseases in adulthood. This study examined associations between ACEs, chronic diseases, and risky behaviors in adults living in Riyadh, Saudi Arabia in 2012 using the ACE International Questionnaire (ACE-IQ). A cross-sectional design was used, and adults who were at least 18 years of age were eligible to participate. ACEs event scores were measured for neglect, household dysfunction, abuse (physical, sexual, and emotional), and peer and community violence. The ACE-IQ was supplemented with questions on risky health behaviors, chronic diseases, and mood. A total of 931 subjects completed the questionnaire (a completion rate of 88%); 57% of the sample was female, 90% was younger than 45 years, 86% had at least a college education, 80% were Saudi nationals, and 58% were married. One-third of the participants (32%) had been exposed to 4 or more ACEs, and 10%, 17%, and 23% had been exposed to 3, 2, or 1 ACEs respectively. Only 18% did not have an ACE. The prevalence of risky health behaviors ranged between 4% and 22%. The prevalence of self-reported chronic diseases ranged between 6% and 17%. Being exposed to 4 or more ACEs increased the risk of having chronic diseases by 2-11 fold, and increased risky health behaviors by 8-21 fold. The findings of this study will contribute to the planning and development of programs to prevent child maltreatment and to alleviate the burden of chronic diseases in adults.",
"title": ""
},
{
"docid": "neg:1840195_2",
"text": "Routing Protocol for Low Power and Lossy Networks (RPL) is the routing protocol for IoT and Wireless Sensor Networks. RPL is a lightweight protocol, having good routing functionality, but has basic security functionality. This may make RPL vulnerable to various attacks. Providing security to IoT networks is challenging, due to their constrained nature and connectivity to the unsecured internet. This survey presents the elaborated review on the security of Routing Protocol for Low Power and Lossy Networks (RPL). This survey is built upon the previous work on RPL security and adapts to the security issues and constraints specific to Internet of Things. An approach to classifying RPL attacks is made based on Confidentiality, Integrity, and Availability. Along with that, we surveyed existing solutions to attacks which are evaluated and given possible solutions (theoretically, from various literature) to the attacks which are not yet evaluated. We further conclude with open research challenges and future work needs to be done in order to secure RPL for Internet of Things (IoT).",
"title": ""
},
{
"docid": "neg:1840195_3",
"text": "Public-key cryptography is indispensable for cyber security. However, as a result of Peter Shor shows, the public-key schemes that are being used today will become insecure once quantum computers reach maturity. This paper gives an overview of the alternative public-key schemes that have the capability to resist quantum computer attacks and compares them.",
"title": ""
},
{
"docid": "neg:1840195_4",
"text": "A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts. Research in natural language dialogue should focus on designing communicative agents which can integrate themselves into these contexts and productively collaborate with humans. In this abstract, we propose a general situated language learning paradigm which is designed to bring about robust language agents able to cooperate productively with humans. This dialogue paradigm is built on a utilitarian definition of language understanding. Language is one of multiple tools which an agent may use to accomplish goals in its environment. We say an agent “understands” language only when it is able to use language productively to accomplish these goals. Under this definition, an agent’s communication success reduces to its success on tasks within its environment. This setup contrasts with many conventional natural language tasks, which maximize linguistic objectives derived from static datasets. Such applications often make the mistake of reifying language as an end in itself. The tasks prioritize an isolated measure of linguistic intelligence (often one of linguistic competence, in the sense of Chomsky (1965)), rather than measuring a model’s effectiveness in real-world scenarios. Our utilitarian definition is motivated by recent successes in reinforcement learning methods. In a reinforcement learning setting, agents maximize success metrics on real-world tasks, without requiring direct supervision of linguistic behavior.",
"title": ""
},
{
"docid": "neg:1840195_5",
"text": "Web browsers show HTTPS authentication warnings (i.e., SSL warnings) when the integrity and confidentiality of users' interactions with websites are at risk. Our goal in this work is to decrease the number of users who click through the Google Chrome SSL warning. Prior research showed that the Mozilla Firefox SSL warning has a much lower click-through rate (CTR) than Chrome. We investigate several factors that could be responsible: the use of imagery, extra steps before the user can proceed, and style choices. To test these factors, we ran six experimental SSL warnings in Google Chrome 29 and measured 130,754 impressions.",
"title": ""
},
{
"docid": "neg:1840195_6",
"text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of shortvs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.",
"title": ""
},
{
"docid": "neg:1840195_7",
"text": "The first objective of the paper is to identifiy a number of issues related to crowdfunding that are worth studying from an industrial organization (IO) perspective. To this end, we first propose a definition of crowdfunding; next, on the basis on a previous empirical study, we isolate what we believe are the main features of crowdfunding; finally, we point to a number of strands of the literature that could be used to study the various features of crowdfunding. The second objective of the paper is to propose some preliminary efforts towards the modelization of crowdfunding. In a first model, we associate crowdfunding with pre-ordering and price discrimination, and we study the conditions under which crowdfunding is preferred to traditional forms of external funding. In a second model, we see crowdfunding as a way to make a product better known by the consumers and we give some theoretical underpinning for the empirical finding that non-profit organizations tend to be more successful in using crowdfunding. JEL classification codes: G32, L11, L13, L15, L21, L31",
"title": ""
},
{
"docid": "neg:1840195_8",
"text": "Biometrics technology is keep growing substantially in the last decades with great advances in biometric applications. An accurate personal authentication or identification has become a critical step in a wide range of applications such as national ID, electronic commerce, and automated and remote banking. The recent developments in the biometrics area have led to smaller, faster, and cheaper systems such as mobile device systems. As a kind of human biometrics for personal identification, fingerprint is the dominant trait due to its simplicity to be captured, processed, and extracted without violating user privacy. In a wide range of applications of fingerprint recognition, including civilian and forensics implementations, a large amount of fingerprints are collected and stored everyday for different purposes. In Automatic Fingerprint Identification System (AFIS) with a large database, the input image is matched with all fields inside the database to identify the most potential identity. Although satisfactory performances have been reported for fingerprint authentication (1:1 matching), both time efficiency and matching accuracy deteriorate seriously by simple extension of a 1:1 authentication procedure to a 1:N identification system (Manhua, 2010). The system response time is the key issue of any AFIS, and it is often improved by controlling the accuracy of the identification to satisfy the system requirement. In addition to developing new technologies, it is necessary to make clear the trade-off between the response time and the accuracy in fingerprint identification systems. Moreover, from the versatility and developing cost points of view, the trade-off should be realized in terms of system design, implementation, and usability. Fingerprint classification is one of the standard approaches to speed up the matching process between the input sample and the collected database (K. Jain et al., 2007). 
Fingerprint classification is considered an indispensable step toward reducing the search time through large fingerprint databases. It refers to the problem of assigning a fingerprint to one of several pre-specified classes, and it presents an interesting problem in pattern recognition, especially in real and time-sensitive applications that require a small response time. The fingerprint classification process works by narrowing down the search domain into smaller database subsets, and hence speeds up the total response time of any AFIS. Even for",
"title": ""
},
{
"docid": "neg:1840195_9",
"text": "A long-standing challenge in coreference resolution has been the incorporation of entity-level information – features defined over clusters of mentions instead of mention pairs. We present a neural network based coreference system that produces high-dimensional vector representations for pairs of coreference clusters. Using these representations, our system learns when combining clusters is desirable. We train the system with a learning-to-search algorithm that teaches it which local decisions (cluster merges) will lead to a high-scoring final coreference partition. The system substantially outperforms the current state-of-the-art on the English and Chinese portions of the CoNLL 2012 Shared Task dataset despite using few hand-engineered features.",
"title": ""
},
{
"docid": "neg:1840195_10",
"text": "There is a clear trend in the automotive industry to use more electrical systems in order to satisfy the ever-growing vehicular load demands. Thus, it is imperative that automotive electrical power systems will obviously undergo a drastic change in the next 10-20 years. Currently, the situation in the automotive industry is such that the demands for higher fuel economy and more electric power are driving advanced vehicular power system voltages to higher levels. For example, the projected increase in total power demand is estimated to be about three to four times that of the current value. This means that the total future power demand of a typical advanced vehicle could roughly reach a value as high as 10 kW. In order to satisfy this huge vehicular load, the approach is to integrate power electronics intensive solutions within advanced vehicular power systems. In view of this fact, this paper aims at reviewing the present situation as well as projected future research and development work of advanced vehicular electrical power systems including those of electric, hybrid electric, and fuel cell vehicles (EVs, HEVs, and FCVs). The paper will first introduce the proposed power system architectures for HEVs and FCVs and will then go on to exhaustively discuss the specific applications of dc/dc and dc/ac power electronic converters in advanced automotive power systems",
"title": ""
},
{
"docid": "neg:1840195_11",
"text": "We give an overview of the scripting languages used in existing cryptocurrencies, and in particular we review in some detail the scripting languages of Bitcoin, Nxt and Ethereum, in the context of a high-level overview of Distributed Ledger Technology and cryptocurrencies. We survey different approaches, and give an overview of critiques of existing languages. We also cover technologies that might be used to underpin extensions and innovations in scripting and contracts, including technologies for verification, such as zero knowledge proofs, proof-carrying code and static analysis, as well as approaches to making systems more efficient, e.g. Merkelized Abstract Syntax Trees.",
"title": ""
},
{
"docid": "neg:1840195_12",
"text": "We develop a dynamic optimal control model of a fashion designers challenge of maintaining brand image in the face of short-term pro
t opportunities through expanded sales that risk brand dilution in the longer-run. The key state variable is the brands reputation, and the key decision is sales volume. Depending on the brands capacity to command higher prices, one of two regimes is observed. If the price mark-ups relative to production costs are modest, then the optimal solution may simply be to exploit whatever value can be derived from the brand in the short-run and retire the brand when that capacity is fully diluted. However, if the price markups are more substantial, then an existing brand should be preserved.",
"title": ""
},
{
"docid": "neg:1840195_13",
"text": "Traditional passive radar detectors compute cross correlation of the raw data in the reference and surveillance channels. However, there is no optimality guarantee for this detector in the presence of a noisy reference. Here, we develop a new detector that utilizes a test statistic based on the cross correlation of the principal left singular vectors of the reference and surveillance signal-plus-noise matrices. This detector offers better performance by exploiting the inherent low-rank structure when the transmitted signals are a weighted periodic summation of several identical waveforms (amplitude and phase modulation), as is the case with commercial digital illuminators as well as noncooperative radar. We consider a scintillating target. We provide analytical detection performance guarantees establishing signal-to-noise ratio thresholds above which the proposed detection statistic reliably discriminates, in an asymptotic sense, the signal versus no-signal hypothesis. We validate these results using extensive numerical simulations. We demonstrate the “near constant false alarm rate (CFAR)” behavior of the proposed detector with respect to a fixed, SNR-independent threshold and contrast that with the need to adjust the detection threshold in an SNR-dependent manner to maintain CFAR for other detectors found in the literature. Extensions of the proposed detector for settings applicable to orthogonal frequency division multiplexing (OFDM), adaptive radar are discussed.",
"title": ""
},
{
"docid": "neg:1840195_14",
"text": "Interdigitated capacitors (IDC) are extensively used for a variety of chemical and biological sensing applications. Printing and functionalizing these IDC sensors on bendable substrates will lead to new innovations in healthcare and medicine, food safety inspection, environmental monitoring, and public security. The synthesis of an electrically conductive aqueous graphene ink stabilized in deionized water using the polymer Carboxymethyl Cellulose (CMC) is introduced in this paper. CMC is a nontoxic hydrophilic cellulose derivative used in food industry. The water-based graphene ink is then used to fabricate IDC sensors on mechanically flexible polyimide substrates. The capacitance and frequency response of the sensors are analyzed, and the effect of mechanical stress on the electrical properties is examined. Experimental results confirm low thin film resistivity (~6;.6×10-3 Ω-cm) and high capacitance (>100 pF). The printed sensors are then used to measure water content of ethanol solutions to demonstrate the proposed conductive ink and fabrication methodology for creating chemical sensors on thin membranes.",
"title": ""
},
{
"docid": "neg:1840195_15",
"text": "BACKGROUND\nThe number of mental health apps (MHapps) developed and now available to smartphone users has increased in recent years. MHapps and other technology-based solutions have the potential to play an important part in the future of mental health care; however, there is no single guide for the development of evidence-based MHapps. Many currently available MHapps lack features that would greatly improve their functionality, or include features that are not optimized. Furthermore, MHapp developers rarely conduct or publish trial-based experimental validation of their apps. Indeed, a previous systematic review revealed a complete lack of trial-based evidence for many of the hundreds of MHapps available.\n\n\nOBJECTIVE\nTo guide future MHapp development, a set of clear, practical, evidence-based recommendations is presented for MHapp developers to create better, more rigorous apps.\n\n\nMETHODS\nA literature review was conducted, scrutinizing research across diverse fields, including mental health interventions, preventative health, mobile health, and mobile app design.\n\n\nRESULTS\nSixteen recommendations were formulated. Evidence for each recommendation is discussed, and guidance on how these recommendations might be integrated into the overall design of an MHapp is offered. Each recommendation is rated on the basis of the strength of associated evidence. It is important to design an MHapp using a behavioral plan and interactive framework that encourages the user to engage with the app; thus, it may not be possible to incorporate all 16 recommendations into a single MHapp.\n\n\nCONCLUSIONS\nRandomized controlled trials are required to validate future MHapps and the principles upon which they are designed, and to further investigate the recommendations presented in this review. Effective MHapps are required to help prevent mental health problems and to ease the burden on health systems.",
"title": ""
},
{
"docid": "neg:1840195_16",
"text": "The growing importance of the service sector in almost every economy in the world has created a significant amount of interest in service operations. In practice, many service sectors have sought and made use of various enhancement programs to improve their operations and performance in an attempt to hold competitive success. As most researchers recognize, service operations link with customers. The customers as participants act in the service operations system driven by the goal of sufficing his/her added values. This is one of the distinctive features of service production and consumption. In the paper, first, we propose the idea of service operations improvement by mapping objectively the service experience of customers from the view of customer journey. Second, a portraying scheme of service experience of customers based on the IDEF3 technique is proposed, and last, some implications on service operations improvement are",
"title": ""
},
{
"docid": "neg:1840195_17",
"text": "This article reviews recent advances in convex optimization algorithms for big data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques such as first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new big data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.",
"title": ""
},
{
"docid": "neg:1840195_18",
"text": "This study presents a novel four-fingered robotic hand to attain a soft contact and high stability under disturbances while holding an object. Each finger is constructed using a tendon-driven skeleton, granular materials corresponding to finger pulp, and a deformable rubber skin. This structure provides soft contact with an object, as well as high adaptation to its shape. Even if the object is deformable and fragile, a grasping posture can be formed without deforming the object. If the air around the granular materials in the rubber skin and jamming transition is vacuumed, the grasping posture can be fixed and the object can be grasped firmly and stably. A high grasping stability under disturbances can be attained. Additionally, the fingertips can work as a small jamming gripper to grasp an object smaller than a fingertip. An experimental investigation indicated that the proposed structure provides a high grasping force with a jamming transition with high adaptability to the object's shape.",
"title": ""
},
{
"docid": "neg:1840195_19",
"text": "When speaking and reasoning about time, people around the world tend to do so with vocabulary and concepts borrowed from the domain of space. This raises the question of whether the cross-linguistic variability found for spatial representations, and the principles on which these are based, may also carry over to the domain of time. Real progress in addressing this question presupposes a taxonomy for the possible conceptualizations in one domain and its consistent and comprehensive mapping onto the other-a challenge that has been taken up only recently and is far from reaching consensus. This article aims at systematizing the theoretical and empirical advances in this field, with a focus on accounts that deal with frames of reference (FoRs). It reviews eight such accounts by identifying their conceptual ingredients and principles for space-time mapping, and it explores the potential for their integration. To evaluate their feasibility, data from some thirty empirical studies, conducted with speakers of sixteen different languages, are then scrutinized. This includes a critical assessment of the methods employed, a summary of the findings for each language group, and a (re-)analysis of the data in view of the theoretical questions. The discussion relates these findings to research on the mental time line, and explores the psychological reality of temporal FoRs, the degree of cross-domain consistency in FoR adoption, the role of deixis, and the sources and extent of space-time mapping more generally.",
"title": ""
}
] |
1840196 | Exploring Gameplay Experiences on the Oculus Rift | [
{
"docid": "pos:1840196_0",
"text": "Despite the word's common usage by gamers and reviewers alike, it is still not clear what immersion means. This paper explores immersion further by investigating whether immersion can be defined quantitatively, describing three experiments in total. The first experiment investigated participants' abilities to switch from an immersive to a non-immersive task. The second experiment investigated whether there were changes in participants' eye movements during an immersive task. The third experiment investigated the effect of an externally imposed pace of interaction on immersion and affective measures (state-anxiety, positive affect, negative affect). Overall the findings suggest that immersion can be measured subjectively (through questionnaires) as well as objectively (task completion time, eye movements). Furthermore, immersion is not only viewed as a positive experience: negative emotions and uneasiness (i.e. anxiety) also run high.",
"title": ""
}
] | [
{
"docid": "neg:1840196_0",
"text": "Several web-based platforms have emerged to ease the development of interactive or near real-time IoT applications by providing a way to connect things and services together and process the data they emit using a data flow paradigm. While these platforms have been found to be useful on their own, many IoT scenarios require the coordination of computing resources across the network: on servers, gateways and devices themselves. To address this, we explore how to extend existing IoT data flow platforms to create a system suitable for execution on a range of run time environments, toward supporting distributed IoT programs that can be partitioned between servers, gateways and devices. Eventually we aim to automate the distribution of data flows using appropriate distribution mechanism, and optimization heuristics based on participating resource capabilities and constraints imposed by the developer.",
"title": ""
},
{
"docid": "neg:1840196_1",
"text": "Performing correct anti-predator behaviour is crucial for prey to survive. But are such abilities lost in species or populations living in predator-free environments? How individuals respond to the loss of predators has been shown to depend on factors such as the degree to which anti-predator behaviour relies on experience, the type of cues evoking the behaviour, the cost of expressing the behaviour and the number of generations under which the relaxed selection has taken place. Here we investigated whether captive-born populations of meerkats (Suricata suricatta) used the same repertoire of alarm calls previously documented in wild populations and whether captive animals, as wild ones, could recognize potential predators through olfactory cues. We found that all alarm calls that have been documented in the wild also occurred in captivity and were given in broadly similar contexts. Furthermore, without prior experience of odours from predators, captive meerkats seemed to dist inguish between faeces of potential predators (carnivores) and non-predators (herbivores). Despite slight structural differences, the alarm calls given in response to the faeces largely resembled those recorded in similar contexts in the wild. These results from captive populations suggest that direct, physical interaction with predators is not necessary for meerkats to perform correct anti-predator behaviour in terms of alarm-call usage and olfactory predator recognition. Such behaviour may have been retained in captivity because relatively little experience seems necessary for correct performance in the wild and/or because of the recency of relaxed selection on these populations. DOI: https://doi.org/10.1111/j.1439-0310.2007.01409.x Posted at the Zurich Open Repository and Archive, University of Zurich ZORA URL: https://doi.org/10.5167/uzh-282 Accepted Version Originally published at: Hollén, L I; Manser, M B (2007). 
Persistence of Alarm-Call Behaviour in the Absence of Predators: A Comparison Between Wild and Captive-Born Meerkats (Suricata suricatta). Ethology, 113(11):10381047. DOI: https://doi.org/10.1111/j.1439-0310.2007.01409.x",
"title": ""
},
{
"docid": "neg:1840196_2",
"text": "Nowadays, many organizations face the issue of information and communication technology (ICT) management. The organization undertakes various activities to assess the state of their ICT or the entire information system (IS), such as IS audits, reviews, IS due diligence, or they already have implemented systems to gather information regarding their IS. For the organizations’ boards and managers there is an issue how to evaluate current IS maturity level and based on assessments, define the IS strategy how to reach the maturity target level. The problem is, what kind of approach to use for rapid and effective IS maturity assessment. This paper summarizes the research regarding IS maturity within different organizations where the authors have delivered either complete IS due diligence or made partial analysis by IS Mirror method. The main objective of this research is to present and confirm the approach which could be used for effective IS maturity assessment and could be provided quickly and even remotely. The paper presented research question, related hypothesis, and approach for rapid IS maturity assessment, and results from several case studies made on-site or remotely.",
"title": ""
},
{
"docid": "neg:1840196_3",
"text": "A multi-armed bandit is an experiment with the goal of accumulating rewards from a payoff distribution with unknown parameters that are to be learned sequentially. This article describes a heuristic for managing multi-armed bandits called randomized probability matching, which randomly allocates observations to arms according the Bayesian posterior probability that each arm is optimal. Advances in Bayesian computation have made randomized probability matching easy to apply to virtually any payoff distribution. This flexibility frees the experimenter to work with payoff distributions that correspond to certain classical experimental designs that have the potential to outperform methods that are ‘optimal’ in simpler contexts. I summarize the relationships between randomized probability matching and several related heuristics that have been used in the reinforcement learning literature. Copyright q 2010 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840196_4",
"text": "We review the nosological criteria and functional neuroanatomical basis for brain death, coma, vegetative state, minimally conscious state, and the locked-in state. Functional neuroimaging is providing new insights into cerebral activity in patients with severe brain damage. Measurements of cerebral metabolism and brain activations in response to sensory stimuli with PET, fMRI, and electrophysiological methods can provide information on the presence, degree, and location of any residual brain function. However, use of these techniques in people with severe brain damage is methodologically complex and needs careful quantitative analysis and interpretation. In addition, ethical frameworks to guide research in these patients must be further developed. At present, clinical examinations identify nosological distinctions needed for accurate diagnosis and prognosis. Neuroimaging techniques remain important tools for clinical research that will extend our understanding of the underlying mechanisms of these disorders.",
"title": ""
},
{
"docid": "neg:1840196_5",
"text": "Recurrent Neural Networks (RNNs) have become the state-of-the-art choice for extracting patterns from temporal sequences. However, current RNN models are ill-suited to process irregularly sampled data triggered by events generated in continuous time by sensors or other neurons. Such data can occur, for example, when the input comes from novel event-driven artificial sensors that generate sparse, asynchronous streams of events or from multiple conventional sensors with different update intervals. In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate. This gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM network achieves faster convergence than regular LSTMs on tasks which require learning of long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order-of-magnitude fewer computes at runtime.",
"title": ""
},
{
"docid": "neg:1840196_6",
"text": "Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10--30% of the time on average, but they are often left on, while idle, utilizing 60% or more of peak power when in the idle state.\n We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also request size and server efficiency.\n We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.",
"title": ""
},
{
"docid": "neg:1840196_7",
"text": "Advances in anatomic understanding are frequently the basis upon which surgical techniques are advanced and refined. Recent anatomic studies of the superficial tissues of the face have led to an increased understanding of the compartmentalized nature of the subcutaneous fat. This report provides a review of the locations and characteristics of the facial fat compartments and provides examples of how this knowledge can be used clinically, specifically with regard to soft tissue fillers.",
"title": ""
},
{
"docid": "neg:1840196_8",
"text": "In this paper, a linear time-varying model-based predictive controller (LTV-MPC) for lateral vehicle guidance by front steering is proposed. Due to the fact that this controller is designed in the scope of a Collision Avoidance System, it has to fulfill the requirement of an appropiate control performance at the limits of vehicle dynamics. To achieve this objective, the introduced approach employs estimations of the applied steering angle as well as the state variable trajectory to be used for successive linearization of the nonlinear prediction model over the prediction horizon. To evaluate the control performance, the proposed controller is compared to a LTV-MPC controller that uses linearizations of the nonlinear prediction model that remain unchanged over the prediction horizon. Simulation results show that an improved control performance can be achieved by the estimation based approach.",
"title": ""
},
{
"docid": "neg:1840196_9",
"text": "Throughout the history of the social evolution, man and animals come into frequent contact that forms an interdependent relationship between man and animals. The images of animals root in the everyday life of all nations, forming unique animal culture of each nation. Therefore, Chinese and English, as the two languages which spoken by the most people in the world, naturally contain a lot of words relating to animals, and because of different history and culture, the connotations of animal words in one language do not coincide with those in another. The clever use of animal words is by no means scarce in everyday communication or literary works, which helps make English and Chinese vivid and lively in image, plain and expressive in character, and rich and strong in flavor. In this study, many animal words are collected for the analysis of the similarities and the differences between the cultural connotations carried by animal words in Chinese and English, find out the causes of differences, and then discuss some methods and techniques for translating these animal words.",
"title": ""
},
{
"docid": "neg:1840196_10",
"text": "OBJECTIVE\nThe assessment of cognitive functions of adults with attention deficit hyperactivity disorder (ADHD) comprises self-ratings of cognitive functioning (subjective assessment) as well as psychometric testing (objective neuropsychological assessment). The aim of the present study was to explore the utility of these assessment strategies in predicting neuropsychological impairments of adults with ADHD as determined by both approaches.\n\n\nMETHOD\nFifty-five adults with ADHD and 66 healthy participants were assessed with regard to cognitive functioning in several domains by employing subjective and objective measurement tools. Significance and effect sizes for differences between groups as well as the proportion of patients with impairments were analyzed. Furthermore, logistic regression analyses were carried out in order to explore the validity of subjective and objective cognitive measures in predicting cognitive impairments.\n\n\nRESULTS\nBoth subjective and objective assessment tools revealed significant cognitive dysfunctions in adults with ADHD. The majority of patients displayed considerable impairments in all cognitive domains assessed. A comparison of effect sizes, however, showed larger dysfunctions in the subjective assessment than in the objective assessment. Furthermore, logistic regression models indicated that subjective cognitive complaints could not be predicted by objective measures of cognition and vice versa.\n\n\nCONCLUSIONS\nSubjective and objective assessment tools were found to be sensitive in revealing cognitive dysfunctions of adults with ADHD. Because of the weak association between subjective and objective measurements, it was concluded that subjective and objective measurements are both important for clinical practice but may provide distinct types of information and capture different aspects of functioning.",
"title": ""
},
{
"docid": "neg:1840196_11",
"text": "In this paper, we propose a probabilistic active tactile transfer learning (ATTL) method to enable robotic systems to exploit their prior tactile knowledge while discriminating among objects via their physical properties (surface texture, stiffness, and thermal conductivity). Using the proposed method, the robot autonomously selects and exploits its most relevant prior tactile knowledge to efficiently learn about new unknown objects with a few training samples or even one. The experimental results show that using our proposed method, the robot successfully discriminated among new objects with 72% discrimination accuracy using only one training sample (on-shot-tactile-learning). Furthermore, the results demonstrate that our method is robust against transferring irrelevant prior tactile knowledge (negative tactile knowledge transfer).",
"title": ""
},
{
"docid": "neg:1840196_12",
"text": "Web 2.0 has had a tremendous impact on education. It facilitates access and availability of learning content in variety of new formats, content creation, learning tailored to students’ individual preferences, and collaboration. The range of Web 2.0 tools and features is constantly evolving, with focus on users and ways that enable users to socialize, share and work together on (user-generated) content. In this chapter we present ALEF – Adaptive Learning Framework that responds to the challenges posed on educational systems in Web 2.0 era. Besides its base functionality – to deliver educational content – ALEF particularly focuses on making the learning process more efficient by delivering tailored learning experience via personalized recommendation, and enabling learners to collaborate and actively participate in learning via interactive educational components. Our existing and successfully utilized solution serves as the medium for presenting key concepts that enable realizing Web 2.0 principles in education, namely lightweight models, and three components of framework infrastructure important for constant evolution and inclusion of students directly into the educational process – annotation framework, feedback infrastructure and widgets. These make possible to devise and implement various mechanisms for recommendation and collaboration – we also present selected methods for personalized recommendation and collaboration together with their evaluation in ALEF.",
"title": ""
},
{
"docid": "neg:1840196_13",
"text": "Classification of time series has been attracting great interest over the past decade. While dozens of techniques have been introduced, recent empirical evidence has strongly suggested that the simple nearest neighbor algorithm is very difficult to beat for most time series problems, especially for large-scale datasets. While this may be considered good news, given the simplicity of implementing the nearest neighbor algorithm, there are some negative consequences of this. First, the nearest neighbor algorithm requires storing and searching the entire dataset, resulting in a high time and space complexity that limits its applicability, especially on resource-limited sensors. Second, beyond mere classification accuracy, we often wish to gain some insight into the data and to make the classification result more explainable, which global characteristics of the nearest neighbor cannot provide. In this work we introduce a new time series primitive, time series shapelets, which addresses these limitations. Informally, shapelets are time series subsequences which are in some sense maximally representative of a class. We can use the distance to the shapelet, rather than the distance to the nearest neighbor to classify objects. As we shall show with extensive empirical evaluations in diverse domains, classification algorithms based on the time series shapelet primitives can be interpretable, more accurate, and significantly faster than state-of-the-art classifiers.",
"title": ""
},
{
"docid": "neg:1840196_14",
"text": "In the discussion about Future Internet, Software-Defined Networking (SDN), enabled by OpenFlow, is currently seen as one of the most promising paradigm. While the availability and scalability concerns rises as a single controller could be alleviated by using replicate or distributed controllers, there lacks a flexible mechanism to allow controller load balancing. This paper proposes BalanceFlow, a controller load balancing architecture for OpenFlow networks. By utilizing CONTROLLER X action extension for OpenFlow switches and cross-controller communication, one of the controllers, called “super controller”, can flexibly tune the flow-requests handled by each controller, without introducing unacceptable propagation latencies. Experiments based on real topology show that BalanceFlow can adjust the load of each controller dynamically.",
"title": ""
},
{
"docid": "neg:1840196_15",
"text": "Most of the algorithms for inverse reinforcement learning (IRL) assume that the reward function is a linear function of the pre-defined state and action features. However, it is often difficult to manually specify the set of features that can make the true reward function representable as a linear function. We propose a Bayesian nonparametric approach to identifying useful composite features for learning the reward function. The composite features are assumed to be the logical conjunctions of the predefined atomic features so that we can represent the reward function as a linear function of the composite features. We empirically show that our approach is able to learn composite features that capture important aspects of the reward function on synthetic domains, and predict taxi drivers’ behaviour with high accuracy on a real GPS trace dataset.",
"title": ""
},
{
"docid": "neg:1840196_16",
"text": "Many researchers see the potential of wireless mobile learning devices to achieve large-scale impact on learning because of portability, low cost, and communications features. This enthusiasm is shared but the lessons drawn from three well-documented uses of connected handheld devices in education lead towards challenges ahead. First, ‘wireless, mobile learning’ is an imprecise description of what it takes to connect learners and their devices together in a productive manner. Research needs to arrive at a more precise understanding of the attributes of wireless networking that meet acclaimed pedagogical requirements and desires. Second, ‘pedagogical applications’ are often led down the wrong road by complex views of technology and simplistic views of social practices. Further research is needed that tells the story of rich pedagogical practice arising out of simple wireless and mobile technologies. Third, ‘large scale’ impact depends on the extent to which a common platform, that meets the requirements of pedagogically rich applications, becomes available. At the moment ‘wireless mobile technologies for education’ are incredibly diverse and incompatible; to achieve scale, a strong vision will be needed to lead to standardisation, overcoming the tendency to marketplace fragmentation.",
"title": ""
},
{
"docid": "neg:1840196_17",
"text": "Many image-to-image translation problems are ambiguous, as a single input image may correspond to multiple possible outputs. In this work, we aim to model a distribution of possible outputs in a conditional generative modeling setting. The ambiguity of the mapping is distilled in a low-dimensional latent vector, which can be randomly sampled at test time. A generator learns to map the given input, combined with this latent code, to the output. We explicitly encourage the connection between output and the latent code to be invertible. This helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse, and produces more diverse results. We explore several variants of this approach by employing different training objectives, network architectures, and methods of injecting the latent code. Our proposed method encourages bijective consistency between the latent encoding and output modes. We present a systematic comparison of our method and other variants on both perceptual realism and diversity.",
"title": ""
},
{
"docid": "neg:1840196_18",
"text": "A number of antioxidants and trace minerals have important roles in immune function and may affect health in transition dairy cows. Vitamin E and beta-carotene are important cellular antioxidants. Selenium (Se) is involved in the antioxidant system via its role in the enzyme glutathione peroxidase. Inadequate dietary vitamin E or Se decreases neutrophil function during the periparturient period. Supplementation of vitamin E and/or Se has reduced the incidence of mastitis and retained placenta, and reduced duration of clinical symptoms of mastitis in some experiments. Research has indicated that beta-carotene supplementation may enhance immunity and reduce the incidence of retained placenta and metritis in dairy cows. Marginal copper deficiency resulted in reduced neutrophil killing and decreased interferon production by mononuclear cells. Copper supplementation of a diet marginal in copper reduced the peak clinical response during experimental Escherichia coli mastitis. Limited research indicated that chromium supplementation during the transition period may increase immunity and reduce the incidence of retained placenta.",
"title": ""
}
] |
1840197 | A Multimodal Deep Learning Network for Group Activity Recognition | [
{
"docid": "pos:1840197_0",
"text": "To effectively collaborate with people, robots are expected to detect and profile the users they are interacting with, but also to modify and adapt their behavior according to the learned models. The goal of this survey is to focus on the perspective of user profiling and behavioral adaptation. On the one hand, human-robot interaction requires a human-oriented perception to model and recognize the human actions and capabilities, the intentions and goals behind such actions, and the parameters characterizing the social interaction. On the other hand, the robot behavior should be adapted in its physical movement within the space, in the actions to be selected to achieve collaboration, and by modulating the parameters characterizing the interaction. In this direction, this survey of the current literature introduces a general classification scheme for both the profiling and the behavioral adaptation research topics in terms of physical, cognitive, and social interaction viewpoints. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "pos:1840197_1",
"text": "Every moment counts in action recognition. A comprehensive understanding of human activity in video requires labeling every frame according to the actions occurring, placing multiple labels densely over a video sequence. To study this problem we extend the existing THUMOS dataset and introduce MultiTHUMOS, a new dataset of dense labels over unconstrained internet videos. Modeling multiple, dense labels benefits from temporal relations within and across classes. We define a novel variant of long short-term memory deep networks for modeling these temporal relations via multiple input and output connections. We show that this model improves action labeling accuracy and further enables deeper understanding tasks ranging from structured retrieval to action prediction.",
"title": ""
},
{
"docid": "pos:1840197_2",
"text": "The automatic assessment of the level of independence of a person, based on the recognition of a set of Activities of Daily Living, is among the most challenging research fields in Ambient Intelligence. The article proposes a framework for the recognition of motion primitives, relying on Gaussian Mixture Modeling and Gaussian Mixture Regression for the creation of activity models. A recognition procedure based on Dynamic Time Warping and Mahalanobis distance is found to: (i) ensure good classification results; (ii) exploit the properties of GMM and GMR modeling to allow for an easy run-time recognition; (iii) enhance the consistency of the recognition via the use of a classifier allowing unknown as an answer.",
"title": ""
}
] | [
{
"docid": "neg:1840197_0",
"text": "Today, there is a big variety of different approaches and algorithms for data filtering and recommendation. In this paper we describe traditional approaches and explain what kinds of modern approaches have been developed lately. Throughout the paper we try to explain the approaches and their problems based on the example of movie recommendations. In the end we show the main challenges recommender systems come across.",
"title": ""
},
{
"docid": "neg:1840197_1",
"text": "Verbal redundancy arises from the concurrent presentation of text and verbatim speech. To inform theories of multimedia learning that guide the design of educational materials, a meta-analysis was conducted to investigate the effects of spoken-only, written-only, and spoken–written presentations on learning retention and transfer. After an extensive search for experimental studies meeting specified inclusion criteria, data from 57 independent studies were extracted. Most of the research participants were postsecondary students. Overall, this meta-analysis revealed that outcomes comparing spoken–written and written-only presentations did not differ, but students who learned from spoken–written presentations outperformed those who learned from spoken-only presentations. This effect was dependent on learners’ prior knowledge, pacing of presentation, and inclusion of animation or diagrams. Specifically, the advantages of spoken–written presentations over spoken-only presentations were found for low prior knowledge learners, system-paced learning materials, and picture-free materials. In comparison with verbatim, spoken–written presentations, presentations displaying key terms extracted from spoken narrations were associated with better learning outcomes and accounted for much of the advantage of spoken–written over spoken-only presentations. These findings have significant implications for the design of multimedia materials.",
"title": ""
},
{
"docid": "neg:1840197_2",
"text": "Three different samples (total N = 485) participated in the development and refinement of the Leadership Scale for Sports (LSS). A five-factor solution with 40 items describing the most salient dimensions of coaching behavior was selected as the most meaningful. These factors were named Training and Instruction, Democratic Behavior, Autocratic Behavior, Social Support, and Positive Feedback. Internal consistency estimates ranged from .45 to .93 and the test-retest reliability coefficients ranged from .71 to .82. The relative stability of the factor structure across the different samples confirmed the factorial validity of the scale. The interpretation of the factors established the content validity of the scale. Finally, possible uses of the LSS were pointed out.",
"title": ""
},
{
"docid": "neg:1840197_3",
"text": "Depression is associated with significant disability, mortality and healthcare costs. It is the third leading cause of disability in high-income countries, 1 and affects approximately 840 million people worldwide. 2 Although biological, psychological and environmental theories have been advanced, 3 the underlying pathophysiology of depression remains unknown and it is probable that several different mechanisms are involved. Vitamin D is a unique neurosteroid hormone that may have an important role in the development of depression. Receptors for vitamin D are present on neurons and glia in many areas of the brain including the cingulate cortex and hippocampus, which have been implicated in the pathophysiology of depression. 4 Vitamin D is involved in numerous brain processes including neuroimmuno-modulation, regulation of neurotrophic factors, neuroprotection, neuroplasticity and brain development, 5 making it biologically plausible that this vitamin might be associated with depression and that its supplementation might play an important part in the treatment of depression. Over two-thirds of the populations of the USA and Canada have suboptimal levels of vitamin D. 6,7 Some studies have demonstrated a strong relationship between vitamin D and depression, 8,9 whereas others have shown no relationship. 10,11 To date there have been eight narrative reviews on this topic, 12–19 with the majority of reviews reporting that there is insufficient evidence for an association between vitamin D and depression. None of these reviews used a comprehensive search strategy, provided inclusion or exclusion criteria, assessed risk of bias or combined study findings. In addition, several recent studies were not included in these reviews. 9,10,20,21 Therefore, we undertook a systematic review and meta-analysis to investigate whether vitamin D deficiency is associated with depression in adults in case–control and cross-sectional studies; whether vitamin D deficiency increases the risk of developing depression in cohort studies in adults; and whether vitamin D supplementation improves depressive symptoms in adults with depression compared with placebo, or prevents depression compared with placebo, in healthy adults in randomised controlled trials (RCTs). We searched the databases MEDLINE, EMBASE, PsycINFO, CINAHL, AMED and Cochrane CENTRAL (up to 2 February 2011) using separate comprehensive strategies developed in consultation with an experienced research librarian (see online supplement DS1). A separate search of PubMed identified articles published electronically prior to print publication within 6 months of our search and therefore not available through MEDLINE. The clinical trials registries clinicaltrials.gov and Current Controlled Trials (controlled-trials.com) were searched for unpublished data. The reference lists …",
"title": ""
},
{
"docid": "neg:1840197_4",
"text": "The new era of the Internet of Things is driving the evolution of conventional Vehicle Ad-hoc Networks into the Internet of Vehicles (IoV). With the rapid development of computation and communication technologies, IoV promises huge commercial interest and research value, thereby attracting a large number of companies and researchers. This paper proposes an abstract network model of the IoV, discusses the technologies required to create the IoV, presents different applications based on certain currently existing technologies, provides several open research challenges and describes essential future research in the area of IoV.",
"title": ""
},
{
"docid": "neg:1840197_5",
"text": "This paper presents recent developments in automatic vision-based technology, whose use is increasing in agriculture and the fruit industry. An automatic fruit quality inspection system for sorting and grading of tomatoes and for detecting defective tomatoes is discussed here. The main aim of this system is to replace manual inspection; this helps speed up the process, improve accuracy and efficiency, and reduce time. The system collects images from a camera placed over a conveyor belt. Image processing is then performed to extract the required features of the fruit, such as texture, color and size. Defective fruit is detected based on blob detection, color detection is based on thresholding, and size detection is based on the binary image of the tomato. Sorting is done based on color and grading is done based on size.",
"title": ""
},
{
"docid": "neg:1840197_6",
"text": "The paper contributes to the emerging literature linking sustainability as a concept to problems researched in HRM literature. Sustainability is often equated with social responsibility. However, emphasizing mainly moral or ethical values neglects that sustainability can also be economically rational. This conceptual paper discusses how the notion of sustainability has developed and emerged in HRM literature. A typology of sustainability concepts in HRM is presented to advance theorizing in the field of Sustainable HRM. The concepts of paradox, duality, and dilemma are reviewed to contribute to understanding the emergence of sustainability in HRM. It is argued in this paper that sustainability can be applied as a concept to cope with the tensions of short- vs. long-term HRM and to make sense of paradoxes, dualities, and dilemmas. Furthermore, it is emphasized that the dualities cannot be reconciled when sustainability is interpreted in a way that leads to ignorance of one of the values or logics. Implications for further research and modest suggestions for managerial practice are derived.",
"title": ""
},
{
"docid": "neg:1840197_7",
"text": "Characteristics of physical activity are indicative of one's mobility level, latent chronic diseases and aging process. Accelerometers have been widely accepted as useful and practical sensors for wearable devices to measure and assess physical activity. This paper reviews the development of wearable accelerometry-based motion detectors. The principle of accelerometry measurement, sensor properties and sensor placements are first introduced. Various research using accelerometry-based wearable motion detectors for physical activity monitoring and assessment, including posture and movement classification, estimation of energy expenditure, fall detection and balance control evaluation, are also reviewed. Finally this paper reviews and compares existing commercial products to provide a comprehensive outlook of current development status and possible emerging technologies.",
"title": ""
},
{
"docid": "neg:1840197_8",
"text": "In this paper, we review research and applications in the area of mediated or remote social touch. Whereas current communication media rely predominately on vision and hearing, mediated social touch allows people to touch each other over a distance by means of haptic feedback technology. Overall, the reviewed applications have interesting potential, such as the communication of simple ideas (e.g., through Hapticons), establishing a feeling of connectedness between distant lovers, or the recovery from stress. However, the beneficial effects of mediated social touch are usually only assumed and have not yet been submitted to empirical scrutiny. Based on social psychological literature on touch, communication, and the effects of media, we assess the current research and design efforts and propose future directions for the field of mediated social touch.",
"title": ""
},
{
"docid": "neg:1840197_9",
"text": "What makes a good recommendation or good list of recommendations?\n Research into recommender systems has traditionally focused on accuracy, in particular how closely the recommender’s predicted ratings are to the users’ true ratings. However, it has been recognized that other recommendation qualities—such as whether the list of recommendations is diverse and whether it contains novel items—may have a significant impact on the overall quality of a recommender system. Consequently, in recent years, the focus of recommender systems research has shifted to include a wider range of “beyond accuracy” objectives.\n In this article, we present a survey of the most discussed beyond-accuracy objectives in recommender systems research: diversity, serendipity, novelty, and coverage. We review the definitions of these objectives and corresponding metrics found in the literature. We also review works that propose optimization strategies for these beyond-accuracy objectives. Since the majority of works focus on one specific objective, we find that it is not clear how the different objectives relate to each other.\n Hence, we conduct a set of offline experiments aimed at comparing the performance of different optimization approaches with a view to seeing how they affect objectives other than the ones they are optimizing. We use a set of state-of-the-art recommendation algorithms optimized for recall along with a number of reranking strategies for optimizing the diversity, novelty, and serendipity of the generated recommendations. For each reranking strategy, we measure the effects on the other beyond-accuracy objectives and demonstrate important insights into the correlations between the discussed objectives. For instance, we find that rating-based diversity is positively correlated with novelty, and we demonstrate the positive influence of novelty on recommendation coverage.",
"title": ""
},
{
"docid": "neg:1840197_10",
"text": "Current avionics architectures implemented on large aircraft use complex processors, which are shared by many avionics applications according Integrated Modular Avionics (IMA) concepts. Using less complex processors on smaller aircraft such as helicopters leads to a distributed IMA architecture. Allocation of the avionics applications on a distributed architecture has to deal with two main challenges. A first problem is about the feasibility of a static allocation of partitions on each processing element. The second problem is the worst-case end-to-end communication delay analysis: due to the scheduling of partitions on processing elements which are not synchronized, some allocation schemes are not valid. This paper first presents a mapping algorithm using an integrated approach taking into account these two issues. In a second step, we evaluate, on a realistic helicopter case study, the feasibility of mapping a given application on a variable number of processing elements. Finally, we present a scalability analysis of the proposed mapping algorithm.",
"title": ""
},
{
"docid": "neg:1840197_11",
"text": "Video description is the automatic generation of natural language sentences that describe the contents of a given video. It has applications in human-robot interaction, helping the visually impaired and video subtitling. The past few years have seen a surge of research in this area due to the unprecedented success of deep learning in computer vision and natural language processing. Numerous methods, datasets and evaluation metrics have been proposed in the literature, calling the need for a comprehensive survey to focus research efforts in this flourishing new direction. This paper fills the gap by surveying the state of the art approaches with a focus on deep learning models; comparing benchmark datasets in terms of their domains, number of classes, and repository size; and identifying the pros and cons of various evaluation metrics like SPICE, CIDEr, ROUGE, BLEU, METEOR, and WMD. Classical video description approaches combined subject, object and verb detection with template based language models to generate sentences. However, the release of large datasets revealed that these methods can not cope with the diversity in unconstrained open domain videos. Classical approaches were followed by a very short era of statistical methods which were soon replaced with deep learning, the current state of the art in video description. Our survey shows that despite the fast-paced developments, video description research is still in its infancy due to the following reasons. Analysis of video description models is challenging because it is difficult to ascertain the contributions, towards accuracy or errors, of the visual features and the adopted language model in the final description. Existing datasets neither contain adequate visual diversity nor complexity of linguistic structures. Finally, current evaluation metrics fall short of measuring the agreement between machine generated descriptions with that of humans. We conclude our survey by listing promising future research directions.",
"title": ""
},
{
"docid": "neg:1840197_12",
"text": "Fractional differential equations have recently been applied in various area of engineering, science, finance, applied mathematics, bio-engineering and others. However, many researchers remain unaware of this field. In this paper, an efficient numerical method for solving the fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The method is based upon Legendre approximations. The properties of Legendre polynomials are utilized to reduce FDDEs to linear or nonlinear system of algebraic equations. Numerical simulation with the exact solutions of FDDEs is presented. AMS Subject Classification: 34A08, 34K37",
"title": ""
},
{
"docid": "neg:1840197_13",
"text": "Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices.",
"title": ""
},
{
"docid": "neg:1840197_14",
"text": "The usability of a software product has recently become a key software quality factor. The International Organization for Standardization (ISO) has developed a variety of models to specify and measure software usability but these individual models do not support all usability aspects. Furthermore, they are not yet well integrated into current software engineering practices and lack tool support. The aim of this research is to survey the actual representation (meanings and interpretations) of usability in ISO standards, indicate some of existing limitations and address them by proposing an enhanced, normative model for the evaluation of software usability.",
"title": ""
},
{
"docid": "neg:1840197_15",
"text": "In the hierarchy of data, information and knowledge, computational methods play a major role in the initial processing of data to extract information, but they alone become less effective to compile knowledge from information. The Kyoto Encyclopedia of Genes and Genomes (KEGG) resource (http://www.kegg.jp/ or http://www.genome.jp/kegg/) has been developed as a reference knowledge base to assist this latter process. In particular, the KEGG pathway maps are widely used for biological interpretation of genome sequences and other high-throughput data. The link from genomes to pathways is made through the KEGG Orthology system, a collection of manually defined ortholog groups identified by K numbers. To better automate this interpretation process the KEGG modules defined by Boolean expressions of K numbers have been expanded and improved. Once genes in a genome are annotated with K numbers, the KEGG modules can be computationally evaluated revealing metabolic capacities and other phenotypic features. The reaction modules, which represent chemical units of reactions, have been used to analyze design principles of metabolic networks and also to improve the definition of K numbers and associated annotations. For translational bioinformatics, the KEGG MEDICUS resource has been developed by integrating drug labels (package inserts) used in society.",
"title": ""
},
{
"docid": "neg:1840197_16",
"text": "In this paper, we have conducted a literature review on the recent developments and publications involving the vehicle routing problem and its variants, namely vehicle routing problem with time windows (VRPTW) and the capacitated vehicle routing problem (CVRP) and also their variants. The VRP is classified as an NP-hard problem. Hence, the use of exact optimization methods may be difficult to solve these problems in acceptable CPU times, when the problem involves real-world data sets that are very large. The vehicle routing problem comes under combinatorial problem. Hence, to get solutions in determining routes which are realistic and very close to the optimal solution, we use heuristics and meta-heuristics. In this paper we discuss the various exact methods and the heuristics and meta-heuristics used to solve the VRP and its variants.",
"title": ""
},
{
"docid": "neg:1840197_17",
"text": "This paper presents the development and experimental performance of a 10 kW high power density three-phase ac-dc-ac converter. The converter consists of a Vienna-type rectifier front-end and a two-level voltage source inverter (VSI). In order to reduce the switching loss and achieve a high operating junction temperature, a SiC JFET and SiC Schottky diode are utilized. Design considerations for the phase-leg units, gate drivers, integrated input filter — combining EMI and boost inductor stages — and the system protection are described in full detail. Experiments are carried out under different operating conditions, and the results obtained verify the performance and feasibility of the proposed converter system.",
"title": ""
},
{
"docid": "neg:1840197_18",
"text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed and analyze and compare the above mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.",
"title": ""
},
{
"docid": "neg:1840197_19",
"text": "The amount of research related to Internet marketing has grown rapidly since the dawn of the Internet Age. A review of the literature base will help identify the topics that have been explored as well as identify topics for further research. This research project collects, synthesizes, and analyses both the research strategies (i.e., methodologies) and content (e.g., topics, focus, categories) of the current literature, and then discusses an agenda for future research efforts. We analyzed 411 articles published over the past eighteen years (1994-present) in thirty top Information Systems (IS) journals and 22 articles in the top 5 Marketing journals. The results indicate an increasing level of activity during the 18-year period, a biased distribution of Internet marketing articles focused on exploratory methodologies, and several research strategies that were either underrepresented or absent from the pool of Internet marketing research. We also identified several subject areas that need further exploration. The compilation of the methodologies used and Internet marketing topics being studied can serve to motivate researchers to strengthen current research and explore new areas of this research.",
"title": ""
}
] |
1840198 | Rumor Identification and Belief Investigation on Twitter | [
{
"docid": "pos:1840198_0",
"text": "The problem of gauging information credibility on social networks has received considerable attention in recent years. Most previous work has chosen Twitter, the world's largest micro-blogging platform, as the premise of research. In this work, we shift the premise and study the problem of information credibility on Sina Weibo, China's leading micro-blogging service provider. With eight times more users than Twitter, Sina Weibo is more of a Facebook-Twitter hybrid than a pure Twitter clone, and exhibits several important characteristics that distinguish it from Twitter. We collect an extensive set of microblogs which have been confirmed to be false rumors based on information from the official rumor-busting service provided by Sina Weibo. Unlike previous studies on Twitter where the labeling of rumors is done manually by the participants of the experiments, the official nature of this service ensures the high quality of the dataset. We then examine an extensive set of features that can be extracted from the microblogs, and train a classifier to automatically detect the rumors from a mixed set of true information and false information. The experiments show that some of the new features we propose are indeed effective in the classification, and even the features considered in previous studies have different implications with Sina Weibo than with Twitter. To the best of our knowledge, this is the first study on rumor analysis and detection on Sina Weibo.",
"title": ""
},
{
"docid": "pos:1840198_1",
"text": "Twitter is useful in a disaster situation for communication, announcements, requests for rescue and so on. On the other hand, it causes a negative by-product: spreading rumors. This paper describes how rumors spread after an earthquake disaster, and discusses how we can deal with them. We first investigated actual instances of rumors after the disaster, and then attempted to disclose the characteristics of those rumors. Based on this investigation we developed a system which detects candidate rumors on Twitter, and then evaluated it. The result of the experiment shows the proposed algorithm can find rumors with acceptable accuracy.",
"title": ""
}
] | [
{
"docid": "neg:1840198_0",
"text": "Text ambiguity is one of the most interesting phenomena in human communication and a difficult problem in Natural Language Processing (NLP). Identification of text ambiguities is an important task for evaluating the quality of text and uncovering its vulnerable points. There exist several types of ambiguity. In the present work we review and compare different approaches to the ambiguity identification task. We also propose our own approach to this problem. Moreover, we present the prototype of a tool for ambiguity identification and measurement in natural language text. The tool is intended to support the process of writing high quality documents.",
"title": ""
},
{
"docid": "neg:1840198_1",
"text": "Modeling the evolution of topics with time is of great value in automatic summarization and analysis of large document collections. In this work, we propose a new probabilistic graphical model to address this issue. The new model, which we call the Multiscale Topic Tomography Model (MTTM), employs non-homogeneous Poisson processes to model generation of word-counts. The evolution of topics is modeled through a multi-scale analysis using Haar wavelets. One of the new features of the model is that it models the evolution of topics at various time-scales of resolution, allowing the user to zoom in and out of the time-scales. Our experiments on Science data using the new model uncover some interesting patterns in topics. The new model is also comparable to LDA in predicting unseen data as demonstrated by our perplexity experiments.",
"title": ""
},
{
"docid": "neg:1840198_2",
"text": "As these paired Commentaries discuss, neuroscientists and architects are just beginning to collaborate, each bringing what they know about their respective fields to the task of improving the environment of research buildings and laboratories.",
"title": ""
},
{
"docid": "neg:1840198_3",
"text": "Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare, and where questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training, where, rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.",
"title": ""
},
{
"docid": "neg:1840198_4",
"text": "Toxicity in online environments is a complex and a systemic issue. Esports communities seem to be particularly suffering from toxic behaviors. Especially in competitive esports games, negative behavior, such as harassment, can create barriers to players achieving high performance and can reduce players' enjoyment which may cause them to leave the game. The aim of this study is to review design approaches in six major esports games to deal with toxic behaviors and to investigate how players perceive and deal with toxicity in those games. Our preliminary findings from an interview study with 17 participants (3 female) from a university esports club show that players define toxicity as behaviors disrupt their morale and team dynamics, and participants are inclined to normalize negative behaviors and rationalize it as part of the competitive game culture. If they choose to take an action against toxic players, they are likely to ostracize toxic players.",
"title": ""
},
{
"docid": "neg:1840198_5",
"text": "This project explores a novel experimental setup towards building a spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants' personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and development this work opens and some of the challenges that lie in the road ahead.",
"title": ""
},
{
"docid": "neg:1840198_6",
"text": "We propose a modular reinforcement learning algorithm which decomposes a Markov decision process into independent modules. Each module is trained using Sarsa(λ). We introduce three algorithms for forming a global policy from module policies, and demonstrate our results using a 2D grid world.",
"title": ""
},
{
"docid": "neg:1840198_7",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "neg:1840198_8",
"text": "Making deep convolutional neural networks more accurate typically comes at the cost of increased computational and memory resources. In this paper, we reduce this cost by exploiting the fact that the importance of features computed by convolutional layers is highly input-dependent, and propose feature boosting and suppression (FBS), a new method to predictively amplify salient convolutional channels and skip unimportant ones at run-time. FBS introduces small auxiliary connections to existing convolutional layers. In contrast to channel pruning methods which permanently remove channels, it preserves the full network structures and accelerates convolution by dynamically skipping unimportant input and output channels. FBS-augmented networks are trained with conventional stochastic gradient descent, making it readily available for many state-of-the-art CNNs. We compare FBS to a range of existing channel pruning and dynamic execution schemes and demonstrate large improvements on ImageNet classification. Experiments show that FBS can respectively provide 5× and 2× savings in compute on VGG-16 and ResNet-18, both with less than 0.6% top-5 accuracy loss.",
"title": ""
},
{
"docid": "neg:1840198_9",
"text": "What do we see when we glance at a natural scene and how does it change as the glance becomes longer? We asked naive subjects to report in a free-form format what they saw when looking at briefly presented real-life photographs. Our subjects received no specific information as to the content of each stimulus. Thus, our paradigm differs from previous studies where subjects were cued before a picture was presented and/or were probed with multiple-choice questions. In the first stage, 90 novel grayscale photographs were foveally shown to a group of 22 native-English-speaking subjects. The presentation time was chosen at random from a set of seven possible times (from 27 to 500 ms). A perceptual mask followed each photograph immediately. After each presentation, subjects reported what they had just seen as completely and truthfully as possible. In the second stage, another group of naive individuals was instructed to score each of the descriptions produced by the subjects in the first stage. Individual scores were assigned to more than a hundred different attributes. We show that within a single glance, much object- and scene-level information is perceived by human subjects. The richness of our perception, though, seems asymmetrical. Subjects tend to have a propensity toward perceiving natural scenes as being outdoor rather than indoor. The reporting of sensory- or feature-level information of a scene (such as shading and shape) consistently precedes the reporting of the semantic-level information. But once subjects recognize more semantic-level components of a scene, there is little evidence suggesting any bias toward either scene-level or object-level recognition.",
"title": ""
},
{
"docid": "neg:1840198_10",
"text": "Accurate performance evaluation of cloud computing resources is a necessary prerequisite for ensuring that quality of service parameters remain within agreed limits. In this paper, we employ both analytical and simulation modeling to address the complexity of cloud computing systems. The analytical model is comprised of distinct functional submodels, the results of which are combined in an iterative manner to obtain the solution with the required accuracy. Our models incorporate the important features of cloud centers such as batch arrival of user requests, resource virtualization, and realistic servicing steps, to obtain important performance metrics such as task blocking probability and total waiting time incurred on user requests. Also, our results reveal important insights for capacity planning to control the delay of servicing users' requests.",
"title": ""
},
{
"docid": "neg:1840198_11",
"text": "To address the need for fundamental universally valid definitions of exact bandwidth and quality factor (Q) of tuned antennas, as well as the need for efficient accurate approximate formulas for computing this bandwidth and Q, exact and approximate expressions are found for the bandwidth and Q of a general single-feed (one-port) lossy or lossless linear antenna tuned to resonance or antiresonance. The approximate expression derived for the exact bandwidth of a tuned antenna differs from previous approximate expressions in that it is inversely proportional to the magnitude |Z'/sub 0/(/spl omega//sub 0/)| of the frequency derivative of the input impedance and, for not too large a bandwidth, it is nearly equal to the exact bandwidth of the tuned antenna at every frequency /spl omega//sub 0/, that is, throughout antiresonant as well as resonant frequency bands. It is also shown that an appropriately defined exact Q of a tuned lossy or lossless antenna is approximately proportional to |Z'/sub 0/(/spl omega//sub 0/)| and thus this Q is approximately inversely proportional to the bandwidth (for not too large a bandwidth) of a simply tuned antenna at all frequencies. The exact Q of a tuned antenna is defined in terms of average internal energies that emerge naturally from Maxwell's equations applied to the tuned antenna. These internal energies, which are similar but not identical to previously defined quality-factor energies, and the associated Q are proven to increase without bound as the size of an antenna is decreased. Numerical solutions to thin straight-wire and wire-loop lossy and lossless antennas, as well as to a Yagi antenna and a straight-wire antenna embedded in a lossy dispersive dielectric, confirm the accuracy of the approximate expressions and the inverse relationship between the defined bandwidth and the defined Q over frequency ranges that cover several resonant and antiresonant frequency bands.",
"title": ""
},
{
"docid": "neg:1840198_12",
"text": "We can leverage data and complex systems science to better understand society and human nature on a population scale through language — utilizing tools that include sentiment analysis, machine learning, and data visualization. Data-driven science and the sociotechnical systems that we use every day are enabling a transformation from hypothesis-driven, reductionist methodology to complex systems sciences. Namely, the emergence and global adoption of social media has rendered possible the real-time estimation of population-scale sentiment, with profound implications for our understanding of human behavior. Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture’s evolution through its texts using a “big data” lens. Given the growing assortment of sentiment measuring instruments, it is imperative to understand which aspects of sentiment dictionaries contribute to both their classification accuracy and their ability to provide richer understanding of texts. Here, we perform detailed, quantitative tests and qualitative assessments of 6 dictionary-based methods applied to 4 different corpora, and briefly examine a further 20 methods. We show that while inappropriate for sentences, dictionary-based methods are generally robust in their classification accuracy for longer texts. Most importantly they can aid understanding of texts with reliable and meaningful word shift graphs if (1) the dictionary covers a sufficiently large enough portion of a given text’s lexicon when weighted by word usage frequency; and (2) words are scored on a continuous scale. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. By classifying the emotional arcs for a filtered subset of 4,803 stories from Project Gutenberg’s fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads. Within stories lie the core values of social behavior, rich with both strategies and proper protocol, which we can begin to study more broadly and systematically as a true reflection of culture. Of profound scientific interest will be the degree to which we can eventually understand the full landscape of human stories, and data driven approaches will play a crucial role. Finally, we utilize web-scale data from Twitter to study the limits of what social data can tell us about public health, mental illness, discourse around the protest movement of #BlackLivesMatter, discourse around climate change, and hidden networks. We conclude with a review of published works in complex systems that separately analyze charitable donations, the happiness of words in 10 languages, 100 years of daily temperature data across the United States, and Australian Rules Football games.",
"title": ""
},
{
"docid": "neg:1840198_13",
"text": "Effective processing of extremely large volumes of spatial data has led to many organizations employing distributed processing frameworks. Hadoop is one such open-source framework that is enjoying widespread adoption. In this paper, we detail an approach to indexing and performing key analytics on spatial data that is persisted in HDFS. Our technique differs from other approaches in that it combines spatial indexing, data load balancing, and data clustering in order to optimize performance across the cluster. In addition, our index supports efficient, random-access queries without requiring a MapReduce job; neither a full table scan, nor any MapReduce overhead is incurred when searching. This facilitates large numbers of concurrent query executions. We will also demonstrate how indexing and clustering positively impacts the performance of range and k-NN queries on large real-world datasets. The performance analysis will enable a number of interesting observations to be made on the behavior of spatial indexes and spatial queries in this distributed processing environment.",
"title": ""
},
{
"docid": "neg:1840198_14",
"text": "Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and powerhungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.",
"title": ""
},
{
"docid": "neg:1840198_15",
"text": "This paper presents the design and implementation of scalar control of an induction motor. This method makes it possible to adjust the speed of the motor by controlling the frequency and amplitude of the stator voltage of the induction motor; the ratio of stator voltage to frequency should be kept constant, which is called V/F or scalar control of an induction motor drive. This paper presents a comparative study of open-loop and closed-loop V/F control of an induction motor. The V/F",
"title": ""
},
{
"docid": "neg:1840198_16",
"text": "This tutorial focuses on the sense of touch within the context of a fully active human observer. It is intended for graduate students and researchers outside the discipline who seek an introduction to the rapidly evolving field of human haptics. The tutorial begins with a review of peripheral sensory receptors in skin, muscles, tendons, and joints. We then describe an extensive body of research on \"what\" and \"where\" channels, the former dealing with haptic perception of objects, surfaces, and their properties, and the latter with perception of spatial layout on the skin and in external space relative to the perceiver. We conclude with a brief discussion of other significant issues in the field, including vision-touch interactions, affective touch, neural plasticity, and applications.",
"title": ""
},
{
"docid": "neg:1840198_17",
"text": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data-center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks that proves robust to the unbalanced and non-IID data distributions that naturally arise. This method allows high-quality models to be trained in relatively few rounds of communication, the principal constraint for federated learning. The key insight is that despite the non-convex loss functions we optimize, parameter averaging over updates from multiple clients produces surprisingly good results, for example decreasing the communication needed to train an LSTM language model by two orders of magnitude.",
"title": ""
},
{
"docid": "neg:1840198_18",
"text": "The Joint Conference on Lexical and Computational Semantics (*SEM) each year hosts a shared task on semantic related topics. In its first edition held in 2012, the shared task was dedicated to resolving the scope and focus of negation. This paper presents the specifications, datasets and evaluation criteria of the task. An overview of participating systems is provided and their results are summarized.",
"title": ""
},
{
"docid": "neg:1840198_19",
"text": "Direct volume rendering (DVR) is of increasing diagnostic value in the analysis of data sets captured using the latest medical imaging modalities. The deployment of DVR in everyday clinical work, however, has so far been limited. One contributing factor is that current transfer function (TF) models can encode only a small fraction of the user's domain knowledge. In this paper, we use histograms of local neighborhoods to capture tissue characteristics. This allows domain knowledge on spatial relations in the data set to be integrated into the TF. As a first example, we introduce partial range histograms in an automatic tissue detection scheme and present its effectiveness in a clinical evaluation. We then use local histogram analysis to perform a classification where the tissue-type certainty is treated as a second TF dimension. The result is an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF",
"title": ""
}
] |
1840199 | Fake News Detection Enhancement with Data Imputation | [
{
"docid": "pos:1840199_0",
"text": "How fake news goes viral via social media? How does its propagation pattern differ from real stories? In this paper, we attempt to address the problem of identifying rumors, i.e., fake information, out of microblog posts based on their propagation structure. We firstly model microblog posts diffusion with propagation trees, which provide valuable clues on how an original message is transmitted and developed over time. We then propose a kernel-based method called Propagation Tree Kernel, which captures high-order patterns differentiating different types of rumors by evaluating the similarities between their propagation tree structures. Experimental results on two real-world datasets demonstrate that the proposed kernel-based approach can detect rumors more quickly and accurately than state-ofthe-art rumor detection models.",
"title": ""
},
{
"docid": "pos:1840199_1",
"text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.",
"title": ""
}
] | [
{
"docid": "neg:1840199_0",
"text": "Post-task ratings of difficulty in a usability test have the potential to provide diagnostic information and be an additional measure of user satisfaction. But the ratings need to be reliable as well as easy to use for both respondents and researchers. Three one-question rating types were compared in a study with 26 participants who attempted the same five tasks with two software applications. The types were a Likert scale, a Usability Magnitude Estimation (UME) judgment, and a Subjective Mental Effort Question (SMEQ). All three types could distinguish between the applications with 26 participants, but the Likert and SMEQ types were more sensitive with small sample sizes. Both the Likert and SMEQ types were easy to learn and quick to execute. The online version of the SMEQ question was highly correlated with other measures and had equal sensitivity to the Likert question type.",
"title": ""
},
{
"docid": "neg:1840199_1",
"text": "Given a pair of handwritten documents written by different individuals, compute a document similarity score irrespective of (i) handwritten styles, (ii) word forms, word ordering and word overflow. • IIIT-HWS: Introducing a large scale synthetic corpus of handwritten word images for enabling deep architectures. • HWNet: A deep CNN architecture for state of the art handwritten word spotting in multi-writer scenarios. • MODS: Measure of document similarity score irrespective of word forms, ordering and paraphrasing of the content. • Applications in Educational Scenario: Comparing handwritten assignments, searching through instructional videos. 2. Contributions 3. Challenges",
"title": ""
},
{
"docid": "neg:1840199_2",
"text": "In this paper, a multi-band antenna for 4G wireless systems is proposed. The proposed antenna consists of a modified planar inverted-F antenna with additional branch line for wide bandwidth and a folded monopole antenna. The antenna provides wide bandwidth for covering the hepta-band LTE/GSM/UMTS operation. The measured 6-dB return loss bandwidth was 169 MHz (793 MHz-962 MHz) at the low frequency band and 1030 MHz (1700 MHz-2730 MHz) at the high frequency band. The overall dimension of the proposed antenna is 55 mm × 110 mm × 5 mm.",
"title": ""
},
{
"docid": "neg:1840199_3",
"text": "Privacy protection is a crucial problem in many biomedical signal processing applications. For this reason, particular attention has been given to the use of secure multiparty computation techniques for processing biomedical signals, whereby nontrusted parties are able to manipulate the signals although they are encrypted. This paper focuses on the development of a privacy preserving automatic diagnosis system whereby a remote server classifies a biomedical signal provided by the client without getting any information about the signal itself and the final result of the classification. Specifically, we present and compare two methods for the secure classification of electrocardiogram (ECG) signals: the former based on linear branching programs (a particular kind of decision tree) and the latter relying on neural networks. The paper deals with all the requirements and difficulties related to working with data that must stay encrypted during all the computation steps, including the necessity of working with fixed point arithmetic with no truncation while guaranteeing the same performance of a floating point implementation in the plain domain. A highly efficient version of the underlying cryptographic primitives is used, ensuring a good efficiency of the two proposed methods, from both a communication and computational complexity perspectives. The proposed systems prove that carrying out complex tasks like ECG classification in the encrypted domain efficiently is indeed possible in the semihonest model, paving the way to interesting future applications wherein privacy of signal owners is protected by applying high security standards.",
"title": ""
},
{
"docid": "neg:1840199_4",
"text": "In this paper, we propose a simple yet effective method for multiple music source separation using convolutional neural networks. Stacked hourglass network, which was originally designed for human pose estimation in natural images, is applied to a music source separation task. The network learns features from a spectrogram image across multiple scales and generates masks for each music source. The estimated mask is refined as it passes over stacked hourglass modules. The proposed framework is able to separate multiple music sources using a single network. Experimental results on MIR-1K and DSD100 datasets validate that the proposed method achieves competitive results comparable to the state-of-the-art methods in multiple music source separation and singing voice separation tasks.",
"title": ""
},
{
"docid": "neg:1840199_5",
"text": "Folk medicine suggests that pomegranate (peels, seeds and leaves) has anti-inflammatory properties; however, the precise mechanisms by which this plant affects the inflammatory process remain unclear. Herein, we analyzed the anti-inflammatory properties of a hydroalcoholic extract prepared from pomegranate leaves using a rat model of lipopolysaccharide-induced acute peritonitis. Male Wistar rats were treated with either the hydroalcoholic extract, sodium diclofenac, or saline, and 1 h later received an intraperitoneal injection of lipopolysaccharides. Saline-injected animals (i. p.) were used as controls. Animals were culled 4 h after peritonitis induction, and peritoneal lavage and peripheral blood samples were collected. Serum and peritoneal lavage levels of TNF-α as well as TNF-α mRNA expression in peritoneal lavage leukocytes were quantified. Total and differential leukocyte populations were analyzed in peritoneal lavage samples. Lipopolysaccharide-induced increases of both TNF-α mRNA and protein levels were diminished by treatment with either pomegranate leaf hydroalcoholic extract (57 % and 48 % mean reduction, respectively) or sodium diclofenac (41 % and 33 % reduction, respectively). Additionally, the numbers of peritoneal leukocytes, especially neutrophils, were markedly reduced in hydroalcoholic extract-treated rats with acute peritonitis. These results demonstrate that pomegranate leaf extract may be used as an anti-inflammatory drug which suppresses the levels of TNF-α in acute inflammation.",
"title": ""
},
{
"docid": "neg:1840199_6",
"text": "The Caltech Multi-Vehicle Wireless Testbed (MVWT) is a platform designed to explore theoretical advances in multi-vehicle coordination and control, networked control systems and high confidence distributed computation. The contribution of this report is to present simulation and experimental results on the generation and implementation of optimal trajectories for the MVWT vehicles. The vehicles are nonlinear, spatially constrained and their input controls are bounded. The trajectories are generated using the NTG software package developed at Caltech. Minimum time trajectories and the application of Model Predictive Control (MPC) are investigated.",
"title": ""
},
{
"docid": "neg:1840199_7",
"text": "The notion of “semiotic scaffolding”, introduced into the semiotic discussions by Jesper Hoffmeyer in December of 2000, is proving to be one of the single most important concepts for the development of semiotics as we seek to understand the full extent of semiosis and the dependence of evolution, particularly in the living world, thereon. I say “particularly in the living world”, because there has been from the first a stubborn resistance among semioticians to seeing how a semiosis prior to and/or independent of living beings is possible. Yet the universe began in a state not only lifeless but incapable of supporting life, and somehow “moved” from there in the direction of being able to sustain life and finally of actually doing so. Wherever dyadic interactions result indirectly in a new condition that either moves the universe closer to being able to sustain life, or moves life itself in the direction not merely of sustaining itself but opening the way to new forms of life, we encounter a “thirdness” in nature of exactly the sort that semiosic triadicity alone can explain. This is the process, both within and without the living world, that requires scaffolding. This essay argues that a fuller understanding of this concept shows why “semiosis” says clearly what “evolution” says obscurely.",
"title": ""
},
{
"docid": "neg:1840199_8",
"text": "OBJECTIVE\nTo examine the association between absence of the nasal bone at the 11-14-week ultrasound scan and chromosomal defects.\n\n\nMETHODS\nUltrasound examination was carried out in 3829 fetuses at 11-14 weeks' gestation immediately before fetal karyotyping. At the scan the fetal crown-rump length (CRL) and nuchal translucency (NT) thickness were measured and the fetal profile was examined for the presence or absence of the nasal bone. Maternal characteristics including ethnic origin were also recorded.\n\n\nRESULTS\nThe fetal profile was successfully examined in 3788 (98.9%) cases. In 3358/3788 cases the fetal karyotype was normal and in 430 it was abnormal. In the chromosomally normal group the incidence of absent nasal bone was related firstly to the ethnic origin of the mother (2.8% for Caucasians, 10.4% for Afro-Caribbeans and 6.8% for Asians), secondly to fetal CRL (4.6% for CRL of 45-54 mm, 3.9% for CRL of 55-64 mm, 1.5% for CRL of 65-74 mm and 1.0% for CRL of 75-84 mm) and thirdly to NT thickness (1.8% for NT < 2.5 mm, 3.4% for NT 2.5-3.4 mm, 5.0% for NT 3.5-4.4 mm and 11.8% for NT > or = 4.5 mm). In the chromosomally abnormal group the nasal bone was absent in 161/242 (66.9%) with trisomy 21, in 48/84 (57.1%) with trisomy 18, in 7/22 (31.8%) with trisomy 13, in 3/34 (8.8%) with Turner syndrome and in 4/48 (8.3%) with other defects.\n\n\nCONCLUSION\nAt the 11-14-week scan the incidence of absent nasal bone is related to the presence or absence of chromosomal defects, CRL, NT thickness and ethnic origin.",
"title": ""
},
{
"docid": "neg:1840199_9",
"text": "Some models of textual corpora employ text generation methods involving n-gram statistics, while others use latent topic variables inferred using the \"bag-of-words\" assumption, in which word order is ignored. Previously, these methods have not been combined. In this work, I explore a hierarchical generative probabilistic model that incorporates both n-gram statistics and latent topic variables by extending a unigram topic model to include properties of a hierarchical Dirichlet bigram language model. The model hyperparameters are inferred using a Gibbs EM algorithm. On two data sets, each of 150 documents, the new model exhibits better predictive accuracy than either a hierarchical Dirichlet bigram language model or a unigram topic model. Additionally, the inferred topics are less dominated by function words than are topics discovered using unigram statistics, potentially making them more meaningful.",
"title": ""
},
{
"docid": "neg:1840199_10",
"text": "Existing studies on semantic parsing mainly focus on the in-domain setting. We formulate cross-domain semantic parsing as a domain adaptation problem: train a semantic parser on some source domains and then adapt it to the target domain. Due to the diversity of logical forms in different domains, this problem presents unique and intriguing challenges. By converting logical forms into canonical utterances in natural language, we reduce semantic parsing to paraphrasing, and develop an attentive sequence-to-sequence paraphrase model that is general and flexible to adapt to different domains. We discover two problems, small micro variance and large macro variance, of pretrained word embeddings that hinder their direct use in neural networks, and propose standardization techniques as a remedy. On the popular OVERNIGHT dataset, which contains eight domains, we show that both cross-domain training and standardized pre-trained word embedding can bring significant improvement.",
"title": ""
},
{
"docid": "neg:1840199_11",
"text": "Multi robot systems are envisioned to play an important role in many robotic applications. A main prerequisite for a team deployed in a wide unknown area is the capability to autonomously navigate, exploiting the information acquired through the on-line estimation of both robot poses and the surrounding environment model, according to the Simultaneous Localization And Mapping (SLAM) framework. As team coordination is improved, distributed techniques for filtering are required in order to enhance autonomous exploration and large scale SLAM, increasing both efficiency and robustness of operation. Although Rao-Blackwellized Particle Filters (RBPF) have been demonstrated to be an effective solution to the problem of single robot SLAM, few extensions to teams of robots exist, and these approaches are characterized by strict assumptions on both communication bandwidth and prior knowledge on relative poses of the teammates. In the present paper we address the problem of multi robot SLAM in the case of limited communication and unknown relative initial poses. Starting from the well established single robot RBPF-SLAM, we propose a simple technique which jointly estimates the SLAM posterior of the robots by fusing the proprioceptive and the exteroceptive information acquired by each teammate. The approach intrinsically reduces the amount of data to be exchanged among the robots, while taking into account the uncertainty in relative pose measurements. Moreover it can be naturally extended to different communication technologies (bluetooth, RFId, wifi, etc.) regardless of their sensing range. The proposed approach is validated through experimental tests.",
"title": ""
},
{
"docid": "neg:1840199_12",
"text": "Object detection is a crucial task for autonomous driving. In addition to requiring high accuracy to ensure safety, object detection for autonomous driving also requires real-time inference speed to guarantee prompt vehicle control, as well as small model size and energy efficiency to enable embedded system deployment. In this work, we propose SqueezeDet, a fully convolutional neural network for object detection that aims to simultaneously satisfy all of the above constraints. In our network we use convolutional layers not only to extract feature maps, but also as the output layer to compute bounding boxes and class probabilities. The detection pipeline of our model only contains a single forward pass of a neural network, thus it is extremely fast. Our model is fully convolutional, which leads to small model size and better energy efficiency. Finally, our experiments show that our model is very accurate, achieving state-of-the-art accuracy on the KITTI [10] benchmark. The source code of SqueezeDet is open-source released.",
"title": ""
},
{
"docid": "neg:1840199_13",
"text": "Support vector machines (SVMs) with the gaussian (RBF) kernel have been popular for practical use. Model selection in this class of SVMs involves two hyper parameters: the penalty parameter C and the kernel width . This letter analyzes the behavior of the SVM classifier when these hyper parameters take very small or very large values. Our results help in understanding the hyperparameter space that leads to an efficient heuristic method of searching for hyperparameter values with small generalization errors. The analysis also indicates that if complete model selection using the gaussian kernel has been conducted, there is no need to consider linear SVM.",
"title": ""
},
{
"docid": "neg:1840199_14",
"text": "Recently developed object detectors employ a convolutional neural network (CNN) by gradually increasing the number of feature layers with a pyramidal shape instead of using a featurized image pyramid. However, the different abstraction levels of CNN feature layers often limit the detection performance, especially on small objects. To overcome this limitation, we propose a CNN-based object detection architecture, referred to as a parallel feature pyramid (FP) network (PFPNet), where the FP is constructed by widening the network width instead of increasing the network depth. First, we adopt spatial pyramid pooling and some additional feature transformations to generate a pool of feature maps with different sizes. In PFPNet, the additional feature transformation is performed in parallel, which yields the feature maps with similar levels of semantic abstraction across the scales. We then resize the elements of the feature pool to a uniform size and aggregate their contextual information to generate each level of the final FP. The experimental results confirmed that PFPNet increases the performance of the latest version of the single-shot multi-box detector (SSD) by 6.4% mAP and, especially, 7.8% APsmall on the MS-COCO dataset.",
"title": ""
},
{
"docid": "neg:1840199_15",
"text": "The increased prevalence of cardiovascular disease among the aging population has prompted greater interest in the field of smart home monitoring and unobtrusive cardiac measurements. This paper introduces the design of a capacitive electrocardiogram (ECG) sensor that measures heart rate with no conscious effort from the user. The sensor consists of two active electrodes and an analog processing circuit that is low cost and customizable to the surfaces of common household objects. Prototype testing was performed in a home laboratory by embedding the sensor into a couch, walker, office and dining chairs. The sensor produced highly accurate heart rate measurements (< 2.3% error) via either direct skin contact or through one and two layers of clothing. The sensor requires no gel dielectric and no grounding electrode, making it particularly suited to the “zero-effort” nature of an autonomous smart home environment. Motion artifacts caused by deviations in body contact with the electrodes were identified as the largest source of unreliability in continuous ECG measurements and will be a primary focus in the next phase of this project.",
"title": ""
},
{
"docid": "neg:1840199_16",
"text": "A fully differential architecture from the antenna to the integrated circuit is proposed for radio transceivers in this paper. The physical implementation of the architecture into truly single-chip radio transceivers is described for the first time. Two key building blocks, the differential antenna and the differential transmit-receive (T-R) switch, were designed, fabricated, and tested. The differential antenna implemented in a package in low-temperature cofired-ceramic technology achieved impedance bandwidth of 2%, radiation efficiency of 84%, and gain of 3.2 dBi at 5.425 GHz in a size of 15 x 15 x 1.6 mm3. The differential T-R switch in a standard complementary metal-oxide-semiconductor technology achieved 1.8-dB insertion loss, 15-dB isolation, and 15-dBm 1-dB power compression point (P1dB) without using additional techniques to enhance the linearity at 5.425 GHz in a die area of 60 x 40 μm2.",
"title": ""
},
{
"docid": "neg:1840199_17",
"text": "Visual tracking using multiple features has been proved as a robust approach because features could complement each other. Since different types of variations such as illumination, occlusion, and pose may occur in a video sequence, especially long sequence videos, how to properly select and fuse appropriate features has become one of the key problems in this approach. To address this issue, this paper proposes a new joint sparse representation model for robust feature-level fusion. The proposed method dynamically removes unreliable features to be fused for tracking by using the advantages of sparse representation. In order to capture the non-linear similarity of features, we extend the proposed method into a general kernelized framework, which is able to perform feature fusion on various kernel spaces. As a result, robust tracking performance is obtained. Both the qualitative and quantitative experimental results on publicly available videos show that the proposed method outperforms both sparse representation-based and fusion based-trackers.",
"title": ""
},
{
"docid": "neg:1840199_18",
"text": "Drama, at least according to the Aristotelian view, is effective inasmuch as it successfully mirrors real aspects of human behavior. This leads to the hypothesis that successful dramas will portray fictional social networks that have the same properties as those typical of human beings across ages and cultures. We outline a methodology for investigating this hypothesis and use it to examine ten of Shakespeare's plays. The cliques and groups portrayed in the plays correspond closely to those which have been observed in spontaneous human interaction, including in hunter-gatherer societies, and the networks of the plays exhibit \"small world\" properties of the type which have been observed in many human-made and natural systems.",
"title": ""
},
{
"docid": "neg:1840199_19",
"text": "Short circuit protection remains one of the major technical barriers in DC microgrids. This paper reviews state of the art of DC solid state circuit breakers (SSCBs). A new concept of a self-powered SSCB using normally-on wideband gap (WBG) semiconductor devices as the main static switch is described in this paper. The new SSCB detects short circuit faults by sensing its terminal voltage rise, and draws power from the fault condition itself to turn and hold off the static switch. The new two-terminal SSCB can be directly placed in a circuit branch without requiring any external power supply or extra wiring. Challenges and future trends in protecting low voltage distribution microgrids against short circuit and other faults are discussed.",
"title": ""
}
] |
1840200 | 28-Gbaud PAM4 and 56-Gb/s NRZ Performance Comparison Using 1310-nm Al-BH DFB Laser | [
{
"docid": "pos:1840200_0",
"text": "Direct modulation at 56 and 50 Gb/s of 1.3-μm InGaAlAs ridge-shaped-buried heterostructure (RS-BH) asymmetric corrugation-pitch-modulation (ACPM) distributed feedback lasers is experimentally demonstrated. The fabricated lasers have a low threshold current (5.6 mA at 85°C), high temperature characteristics (71 K), high slope relaxation frequency (3.2 GHz/mA1/2 at 85°C), and wide bandwidth (22.1 GHz at 85°C). These superior properties enable the lasers to run at 56 Gb/s and 55°C and 50 Gb/s at up to 80°C for back-to-back operation with clear eye openings. This is achieved by the combination of a low-leakage RS-BH and an ACPM grating. Moreover, successful transmission of 56- and 50-Gb/s modulated signals over a 10-km standard single-mode fiber is achieved. These results confirm the suitability of this type of laser for use as a cost-effective light source in 400 GbE and OTU5 applications.",
"title": ""
}
] | [
{
"docid": "neg:1840200_0",
"text": "We present a method for automatically generating input parsers from English specifications of input file formats. We use a Bayesian generative model to capture relevant natural language phenomena and translate the English specification into a specification tree, which is then translated into a C++ input parser. We model the problem as a joint dependency parsing and semantic role labeling task. Our method is based on two sources of information: (1) the correlation between the text and the specification tree and (2) noisy supervision as determined by the success of the generated C++ parser in reading input examples. Our results show that our approach achieves 80.0% F-Score accuracy compared to an F-Score of 66.7% produced by a state-of-the-art semantic parser on a dataset of input format specifications from the ACM International Collegiate Programming Contest (which were written in English for humans with no intention of providing support for automated processing).",
"title": ""
},
{
"docid": "neg:1840200_1",
"text": "The concept of an antipodal bipolar fuzzy graph of a given bipolar fuzzy graph is introduced. Characterizations of antipodal bipolar fuzzy graphs are presented when the bipolar fuzzy graph is complete or strong. Some isomorphic properties of antipodal bipolar fuzzy graph are discussed. The notion of self median bipolar fuzzy graphs of a given bipolar fuzzy graph is also introduced.",
"title": ""
},
{
"docid": "neg:1840200_2",
"text": "Many multimedia systems stream real-time visual data continuously for a wide variety of applications. These systems can produce vast amounts of data, but few studies take advantage of the versatile and real-time data. This paper presents a novel model based on the Convolutional Neural Networks (CNNs) to handle such imbalanced and heterogeneous data and successfully identifies the semantic concepts in these multimedia systems. The proposed model can discover the semantic concepts from the data with a skewed distribution using a dynamic sampling technique. The paper also presents a system that can retrieve real-time visual data from heterogeneous cameras, and the run-time environment allows the analysis programs to process the data from thousands of cameras simultaneously. The evaluation results in comparison with several state-of-the-art methods demonstrate the ability and effectiveness of the proposed model on visual data captured by public network cameras.",
"title": ""
},
{
"docid": "neg:1840200_3",
"text": "Genetic or acquired destabilization of the dermal extracellular matrix evokes injury- and inflammation-driven progressive soft tissue fibrosis. Dystrophic epidermolysis bullosa (DEB), a heritable human skin fragility disorder, is a paradigmatic disease to investigate these processes. Studies of DEB have generated abundant new information on cellular and molecular mechanisms at play in skin fibrosis which are not only limited to intractable diseases, but also applicable to some of the most common acquired conditions. Here, we discuss recent advances in understanding the biological and mechanical mechanisms driving the dermal fibrosis in DEB. Much of this progress is owed to the implementation of cell and tissue omics studies, which we pay special attention to. Based on the novel findings and increased understanding of the disease mechanisms in DEB, translational aspects and future therapeutic perspectives are emerging.",
"title": ""
},
{
"docid": "neg:1840200_4",
"text": "Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that complexity often takes the form of modularity in structure and functionality. Therefore, a hierarchical perspective can be essential to understanding complex ecological systems. But how can such a hierarchical approach help us with modeling spatially heterogeneous, nonlinear dynamic systems like landscapes, be they natural or human-dominated? In this paper, we present a spatially explicit hierarchical modeling approach to studying the patterns and processes of heterogeneous landscapes. We first discuss the theoretical basis for the modeling approach: the hierarchical patch dynamics (HPD) paradigm and the scaling ladder strategy. We then describe the general structure of a hierarchical urban landscape model (HPDM-PHX) which is developed using this modeling approach. In addition, we introduce a hierarchical patch dynamics modeling platform (HPD-MP), a software package that is designed to facilitate the development of spatial hierarchical models. We then illustrate the utility of HPD-MP through two examples: a hierarchical cellular automata model of land use change and a spatial multi-species population dynamics model.",
"title": ""
},
{
"docid": "neg:1840200_5",
"text": "While humans have an incredible capacity to acquire new skills and alter their behavior as a result of experience, enhancements in performance are typically narrowly restricted to the parameters of the training environment, with little evidence of generalization to different, even seemingly highly related, tasks. Such specificity is a major obstacle for the development of many real-world training or rehabilitation paradigms, which necessarily seek to promote more general learning. In contrast to these typical findings, research over the past decade has shown that training on 'action video games' produces learning that transfers well beyond the training task. This has led to substantial interest among those interested in rehabilitation, for instance, after stroke or to treat amblyopia, or training for various precision-demanding jobs, for instance, endoscopic surgery or piloting unmanned aerial drones. Although the predominant focus of the field has been on outlining the breadth of possible action-game-related enhancements, recent work has concentrated on uncovering the mechanisms that underlie these changes, an important first step towards the goal of designing and using video games for more definite purposes. Game playing may not convey an immediate advantage on new tasks (increased performance from the very first trial), but rather the true effect of action video game playing may be to enhance the ability to learn new tasks. Such a mechanism may serve as a signature of training regimens that are likely to produce transfer of learning.",
"title": ""
},
{
"docid": "neg:1840200_6",
"text": "In community question answering (cQA), the quality of answers is determined by the matching degree between question-answer pairs and the correlation among the answers. In this paper, we show that the dependency between the answer quality labels also plays a pivotal role. To validate the effectiveness of label dependency, we propose two neural network-based models, with different combination modes of Convolutional Neural Networks, Long Short Term Memory and Conditional Random Fields. Extensive experiments are conducted on the dataset released by the SemEval-2015 cQA shared task. The first model is a stacked ensemble of the networks. It achieves 58.96% on macro averaged F1, which improves the state-of-the-art neural network-based method by 2.82% and outperforms the Top-1 system in the shared task by 1.77%. The second is a simple attention-based model whose input is the concatenation of the question and its corresponding answers. It produces promising results with 58.29% on overall F1 and gains the best performance on the Good and Bad categories.",
"title": ""
},
{
"docid": "neg:1840200_7",
"text": "This paper addresses the problem of automatic player identification in broadcast sports videos filmed with a single side-view medium distance camera. Player identification in this setting is a challenging task because visual cues such as faces and jersey numbers are not clearly visible. Thus, this task requires sophisticated approaches to capture distinctive features from players to distinguish them. To this end, we use Convolutional Neural Networks (CNN) features extracted at multiple scales and encode them with an advanced pooling, called Fisher vector. We leverage it for exploring representations that have sufficient discriminatory power and ability to magnify subtle differences. We also analyze the distinguishing parts of the players and present a part based pooling approach to use these distinctive feature points. The resulting player representation is able to identify players even in difficult scenes. It achieves state-of-the-art results up to 96% on NBA basketball clips.",
"title": ""
},
{
"docid": "neg:1840200_8",
"text": "Modern robotic systems tend to get more complex sensors at their disposal, resulting in complex algorithms to process their data. For example, camera images are being used to map their environment and plan their route. On the other hand, robotic systems are becoming mobile more often and need to be as energy-efficient as possible; quadcopters are an example of this. These two trends interfere with each other: data-intensive, complex algorithms require a lot of processing power, which is in general neither energy-friendly nor mobile-friendly. In this paper, we describe how to move the complex algorithms to a computing platform that is not part of the mobile part of the setup, i.e. to offload the processing to a base station. We use the ROS framework for this, as ROS provides a lot of existing computation solutions. On the mobile part of the system, our hard real-time execution framework, called LUNA, is used to make it possible to run the loop controllers on it. The design of a 'bridge node' is explained, which is used to connect the LUNA framework to ROS. The main issue to tackle is to subscribe to an arbitrary ROS topic at run-time, instead of defining the ROS topics at compile-time. Furthermore, it is shown that this principle works, and the requirements on network bandwidth are discussed.",
"title": ""
},
{
"docid": "neg:1840200_9",
"text": "Article history: Received 31 December 2007 Received in revised form 12 December 2008 Accepted 3 January 2009",
"title": ""
},
{
"docid": "neg:1840200_10",
"text": "A frequency-reconfigurable microstrip slot antenna is proposed. The antenna is capable of frequency switching at six different frequency bands between 2.2 and 4.75 GHz. Five RF p-i-n diode switches are positioned in the slot to achieve frequency reconfigurability. The feed line and the slot are bended to reduce 33% of the original size of the antenna. The biasing circuit is integrated into the ground plane to minimize the parasitic effects toward the performance of the antenna. Simulated and measured results are used to demonstrate the performance of the antenna. The simulated and measured return losses, together with the radiation patterns, are presented and compared.",
"title": ""
},
{
"docid": "neg:1840200_11",
"text": "Switching common-mode voltage (CMV) generated by the pulse width modulation (PWM) of the inverter causes common-mode currents, which lead to motor bearing failures and electromagnetic interference problems in multiphase drives. Such switching CMV can be reduced by taking advantage of the switching states of multilevel multiphase inverters that produce zero CMV. Specific space-vector PWM (SVPWM) techniques with CMV elimination, which only use zero CMV states, have been proposed for three-level five-phase drives, and for open-end winding five-, six-, and seven-phase drives, but such methods cannot be extended to a higher number of levels or phases. This paper presents a general (for any number of levels and phases) SVPWM with CMV elimination. The proposed technique can be applied to most multilevel topologies, has low computational complexity and is suitable for low-cost hardware implementations. The new algorithm is implemented in a low-cost field-programmable gate array and it is successfully tested in the laboratory using a five-level five-phase motor drive.",
"title": ""
},
{
"docid": "neg:1840200_12",
"text": "Recent experiments indicate the need for revision of a model of spatial memory consisting of viewpoint-specific representations, egocentric spatial updating and a geometric module for reorientation. Instead, it appears that both egocentric and allocentric representations exist in parallel, and combine to support behavior according to the task. Current research indicates complementary roles for these representations, with increasing dependence on allocentric representations with the amount of movement between presentation and retrieval, the number of objects remembered, and the size, familiarity and intrinsic structure of the environment. Identifying the neuronal mechanisms and functional roles of each type of representation, and of their interactions, promises to provide a framework for investigation of the organization of human memory more generally.",
"title": ""
},
{
"docid": "neg:1840200_13",
"text": "Nostalgia is a psychological phenomenon we all can relate to but have a hard time defining. What characterizes the mental state of feeling nostalgia? What psychological function does it serve? Different published materials in a wide range of fields, from consumption research and sport science to clinical psychology, psychoanalysis and sociology, all have slightly different definitions of this mental experience. Some claim it is a psychiatric disease giving melancholic emotions to a memory you would consider a happy one, while others state it enforces positivity in our mood. First, this paper presents a thorough review of the history of nostalgia, followed by a look at the body of contemporary nostalgia research to see what it could be constituted of. Finally, we dig even deeper to see what is suggested by the literature in terms of triggers and functions. Some say that digitally recorded material like music and videos has a potential nostalgic component, which could trigger a reflection of the past in ways that were difficult before such inventions, hinting that nostalgia as a cultural phenomenon is on the rise. Some authors say that odors have the strongest impact on nostalgic reverie due to activating it without too much cognitive appraisal. Cognitive neuropsychology has shed new light on a lot of human psychological phenomena, and even though empirical testing has been scarce in this field, it should get a fair scrutiny within this perspective as well, hopefully helping to clarify the definition of the word to ease future investigations, both scientifically speaking and in laymen's retro hysteria.",
"title": ""
},
{
"docid": "neg:1840200_14",
"text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"title": ""
},
{
"docid": "neg:1840200_15",
"text": "This paper explores patterns of adoption and use of information and communications technology (ICT) by small and medium sized enterprises (SMEs) in the southwest London and Thames Valley region of England. The paper presents preliminary results of a survey of around 400 SMEs drawn from four economically significant sectors in the region: food processing, transport and logistics, media and Internet services. The main objectives of the study were to explore ICT adoption and use patterns by SMEs, to identify factors enabling or inhibiting the successful adoption and use of ICT, and to explore the effectiveness of government policy mechanisms at national and regional levels. While our main result indicates a generally favourable attitude to ICT amongst the SMEs surveyed, it also suggests a failure to recognise ICT’s strategic potential. A surprising result was the overwhelming ignorance of regional, national and European Union wide policy initiatives to support SMEs. This strikes at the very heart of regional, national and European policy that have identified SMEs as requiring specific support mechanisms. Our findings from one of the UK’s most productive regions therefore have important implications for policy aimed at ICT adoption and use by SMEs.",
"title": ""
},
{
"docid": "neg:1840200_16",
"text": "The past decade has witnessed an increasing adoption of cloud database technology, which provides better scalability, availability, and fault-tolerance via transparent partitioning and replication, and automatic load balancing and fail-over. However, only a small number of cloud databases provide strong consistency guarantees for distributed transactions, despite decades of research on distributed transaction processing, due to practical challenges that arise in the cloud setting, where failures are the norm, and human administration is minimal. For example, dealing with locks left by transactions initiated by failed machines, and determining a multi-programming level that avoids thrashing without under-utilizing available resources, are some of the challenges that arise when using lock-based transaction processing mechanisms in the cloud context. Even in the case of optimistic concurrency control, most proposals in the literature deal with distributed validation but still require the database to acquire locks during two-phase commit when installing updates of a single transaction on multiple machines. Very little theoretical work has been done to entirely eliminate the need for locking in distributed transactions, including locks acquired during two-phase commit. In this paper, we re-design optimistic concurrency control to eliminate any need for locking even for atomic commitment, while handling the practical issues in earlier theoretical work related to this problem. We conduct an extensive experimental study to evaluate our approach against lock-based methods under various setups and workloads, and demonstrate that our approach provides many practical advantages in the cloud context.",
"title": ""
},
{
"docid": "neg:1840200_17",
"text": "Bone tissue is continuously remodeled through the concerted actions of bone cells, which include bone resorption by osteoclasts and bone formation by osteoblasts, whereas osteocytes act as mechanosensors and orchestrators of the bone remodeling process. This process is under the control of local (e.g., growth factors and cytokines) and systemic (e.g., calcitonin and estrogens) factors that all together contribute for bone homeostasis. An imbalance between bone resorption and formation can result in bone diseases including osteoporosis. Recently, it has been recognized that, during bone remodeling, there are an intricate communication among bone cells. For instance, the coupling from bone resorption to bone formation is achieved by interaction between osteoclasts and osteoblasts. Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.",
"title": ""
},
{
"docid": "neg:1840200_18",
"text": "This paper presents a new optimization-based method to control three micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. Our control strategy arises from physics that apply force in the negative direction of states errors. The objective is to regulate the inter-agent spacing, heading and position of the set of agents, for motion in two dimensions, while the system is inherently underactuated. Simulation results on three agents and a proof-of-concept experiment on two agents show the feasibility of the idea to shed light on future micro/nanoscale multi-agent explorations. Average tracking error of less than 50 micrometers and 1.85 degrees is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical spherical-shape agents with nominal radius less than of 250 micrometers operating within several body-lengths of each other.",
"title": ""
},
{
"docid": "neg:1840200_19",
"text": "We present a framework and algorithm to analyze first person RGBD videos captured from the robot while physically interacting with humans. Specifically, we explore reactions and interactions of persons facing a mobile robot from a robot centric view. This new perspective offers social awareness to the robots, enabling interesting applications. As far as we know, there is no public 3D dataset for this problem. Therefore, we record two multi-modal first-person RGBD datasets that reflect the setting we are analyzing. We use a humanoid and a non-humanoid robot equipped with a Kinect. Notably, the videos contain a high percentage of ego-motion due to the robot self-exploration as well as its reactions to the persons' interactions. We show that separating the descriptors extracted from ego-motion and independent motion areas, and using them both, allows us to achieve superior recognition results. Experiments show that our algorithm recognizes the activities effectively and outperforms other state-of-the-art methods on related tasks.",
"title": ""
}
] |
1840201 | Approximate Note Transcription for the Improved Identification of Difficult Chords | [
{
"docid": "pos:1840201_0",
"text": "This article proposes a method for the automatic transcription of the melody, bass line, and chords in polyphonic pop music. The method uses a frame-wise pitch-salience estimator as a feature extraction front-end. For the melody and bass-line transcription, this is followed by acoustic modeling of note events and musicological modeling of note transitions. The acoustic models include a model for the target notes (i.e., melody or bass notes) and a background model. The musicological model involves key estimation and note bigrams that determine probabilities for transitions between target notes. A transcription of the melody or the bass line is obtained using Viterbi search via the target and the background note models. The performance of the melody and the bass-line transcription is evaluated using approximately 8.5 hours of realistic polyphonic music. The chord transcription maps the pitch salience estimates to a pitch-class representation and uses trained chord models and chord-transition probabilities to produce a transcription consisting of major and minor triads. For chords, the evaluation material consists of the first eight Beatles albums. The method is computationally efficient and allows causal implementation, so it can process streaming audio. Transcription of music refers to the analysis of an acoustic music signal for producing a parametric representation of the signal. The representation may be a music score with a meticulous arrangement for each instrument or an approximate description of melody and chords in the piece, for example. The latter type of transcription is commonly used in commercial songbooks of pop music and is usually sufficient for musicians or music hobbyists to play the piece. On the other hand, more detailed transcriptions are often employed in classical music to preserve the exact arrangement of the composer.",
"title": ""
}
] | [
{
"docid": "neg:1840201_0",
"text": "BACKGROUND\nBurnout is a major issue among medical students. Its general characteristics are loss of interest in study and lack of motivation. A study of the phenomenon must extend beyond the university environment and personality factors to consider whether career choice has a role in the occurrence of burnout.\n\n\nMETHODS\nQuantitative, national survey (n = 733) among medical students, using a 12-item career motivation list compiled from published research results and a pilot study. We measured burnout by the validated Hungarian version of MBI-SS.\n\n\nRESULTS\nThe most significant career choice factor was altruistic motivation, followed by extrinsic motivations: gaining a degree, finding a job, accessing career opportunities. Lack of altruism was found to be a major risk factor, in addition to the traditional risk factors, for cynicism and reduced academic efficacy. Our study confirmed the influence of gender differences on both career choice motivations and burnout.\n\n\nCONCLUSION\nThe structure of career motivation is a major issue in the transformation of the medical profession. Since altruism is a prominent motivation for many women studying medicine, their entry into the profession in increasing numbers may reinforce its traditional character and act against the present trend of deprofessionalization.",
"title": ""
},
{
"docid": "neg:1840201_1",
"text": "Performance and high availability have become increasingly important drivers, amongst other drivers, for user retention in the context of web services such as social networks, and web search. Exogenic and/or endogenic factors often give rise to anomalies, making it very challenging to maintain high availability, while also delivering high performance. Given that service-oriented architectures (SOA) typically have a large number of services, with each service having a large set of metrics, automatic detection of anomalies is nontrivial. Although there exists a large body of prior research in anomaly detection, existing techniques are not applicable in the context of social network data, owing to the inherent seasonal and trend components in the time series data. To this end, we developed two novel statistical techniques for automatically detecting anomalies in cloud infrastructure data. Specifically, the techniques employ statistical learning to detect anomalies in both application, and system metrics. Seasonal decomposition is employed to filter the trend and seasonal components of the time series, followed by the use of robust statistical metrics – median and median absolute deviation (MAD) – to accurately detect anomalies, even in the presence of seasonal spikes. We demonstrate the efficacy of the proposed techniques from three different perspectives, viz., capacity planning, user behavior, and supervised learning. In particular, we used production data for evaluation, and we report Precision, Recall, and F-measure in each case.",
"title": ""
},
{
"docid": "neg:1840201_2",
"text": "Circularly polarized (CP) dielectric resonator antenna (DRA) subarrays have been numerically studied and experimentally verified. Elliptical CP DRA is used as the antenna element, which is excited by either a narrow slot or a probe. The elements are arranged in a 2 by 2 subarray configuration and are excited sequentially. In order to optimize the CP bandwidth, wideband feeding networks have been designed. Three different types of feeding network are studied; they are parallel feeding network, series feeding network and hybrid ring feeding network. For the CP DRA subarray with hybrid ring feeding network, the impedance matching bandwidth (S11<-10 dB) and 3-dB AR bandwidth achieved are 44% and 26% respectively",
"title": ""
},
{
"docid": "neg:1840201_3",
"text": "Since 2013, a stream of disclosures has prompted reconsideration of surveillance law and policy. One of the most controversial principles, both in the United States and abroad, is that communications metadata receives substantially less protection than communications content. Several nations currently collect telephone metadata in bulk, including on their own citizens. In this paper, we attempt to shed light on the privacy properties of telephone metadata. Using a crowdsourcing methodology, we demonstrate that telephone metadata is densely interconnected, can trivially be reidentified, and can be used to draw sensitive inferences.",
"title": ""
},
{
"docid": "neg:1840201_4",
"text": "The ranking of n objects based on pairwise comparisons is a core machine learning problem, arising in recommender systems, ad placement, player ranking, biological applications and others. In many practical situations the true pairwise comparisons cannot be actively measured, but a subset of all n(n−1)/2 comparisons is passively and noisily observed. Optimization algorithms (e.g., the SVM) could be used to predict a ranking with fixed expected Kendall tau distance, while achieving an Ω(n) lower bound on the corresponding sample complexity. However, due to their centralized structure they are difficult to extend to online or distributed settings. In this paper we show that much simpler algorithms can match the same Ω(n) lower bound in expectation. Furthermore, if an average of O(n log(n)) binary comparisons are measured, then one algorithm recovers the true ranking in a uniform sense, while the other predicts the ranking more accurately near the top than the bottom. We discuss extensions to online and distributed ranking, with benefits over traditional alternatives.",
"title": ""
},
{
"docid": "neg:1840201_5",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "neg:1840201_6",
"text": "Technology in football has been debated by pundits, players and fans all over the world for the past decade. FIFA has recently commissioned the use of ‘Hawk-Eye’ and ‘Goal Ref’ goal line technology systems at the 2014 World Cup in Brazil. This paper gives an in depth evaluation of the possible technologies that could be used in football and determines the potential benefits and implications these systems could have on the officiating of football matches. The use of technology in other sports is analyzed to come to a conclusion as to whether officiating technology should be used in football. Will football be damaged by the loss of controversial incidents such as Frank Lampard’s goal against Germany at the 2010 World Cup? Will cost, accuracy and speed continue to prevent the use of officiating technology in football? Time will tell, but for now, any advancement in the use of technology in football will be met by some with discontent, whilst others see it as moving the sport into the 21 century.",
"title": ""
},
{
"docid": "neg:1840201_7",
"text": "Unsupervised semantic segmentation in the time series domain is a much-studied problem due to its potential to detect unexpected regularities and regimes in poorly understood data. However, the current techniques have several shortcomings, which have limited the adoption of time series semantic segmentation beyond academic settings for three primary reasons. First, most methods require setting/learning many parameters and thus may have problems generalizing to novel situations. Second, most methods implicitly assume that all the data is segmentable, and have difficulty when that assumption is unwarranted. Finally, most research efforts have been confined to the batch case, but online segmentation is clearly more useful and actionable. To address these issues, we present an algorithm which is domain agnostic, has only one easily determined parameter, and can handle data streaming at a high rate. In this context, we test our algorithm on the largest and most diverse collection of time series datasets ever considered, and demonstrate our algorithm's superiority over current solutions. Furthermore, we are the first to show that semantic segmentation may be possible at superhuman performance levels.",
"title": ""
},
{
"docid": "neg:1840201_8",
"text": "DISCOVER operates on relational databases and facilitates information discovery on them by allowing its user to issue keyword queries without any knowledge of the database schema or of SQL. DISCOVER returns qualified joining networks of tuples, that is, sets of tuples that are associated because they join on their primary and foreign keys and collectively contain all the keywords of the query. DISCOVER proceeds in two steps. First the Candidate Network Generator generates all candidate networks of relations, that is, join expressions that generate the joining networks of tuples. Then the Plan Generator builds plans for the efficient evaluation of the set of candidate networks, exploiting the opportunities to reuse common subexpressions of the candidate networks. We prove that DISCOVER finds without redundancy all relevant candidate networks, whose size can be data bound, by exploiting the structure of the schema. We prove that the selection of the optimal execution plan (way to reuse common subexpressions) is NP-complete. We provide a greedy algorithm and we show that it provides near-optimal plan execution time cost. Our experimentation also provides hints on tuning the greedy algorithm.",
"title": ""
},
{
"docid": "neg:1840201_9",
"text": "Although pricing fraud is an important issue for improving service quality of online shopping malls, research on automatic fraud detection has been limited. In this paper, we propose an unsupervised learning method based on a finite mixture model to identify pricing frauds. We consider two states, normal and fraud, for each item according to whether an item description is relevant to its price by utilizing the known number of item clusters. Two states of an observed item are modeled as hidden variables, and the proposed models estimate the state by using an expectation maximization (EM) algorithm. Subsequently, we suggest a special case of the proposed model, which is applicable when the number of item clusters is unknown. The experiment results show that the proposed models are more effective in identifying pricing frauds than the existing outlier detection methods. Furthermore, it is presented that utilizing the number of clusters is helpful in facilitating the improvement of pricing fraud detection",
"title": ""
},
{
"docid": "neg:1840201_10",
"text": "Ghrelin increases non-REM sleep and decreases REM sleep in young men but does not affect sleep in young women. In both sexes, ghrelin stimulates the activity of the somatotropic and the hypothalamic-pituitary-adrenal (HPA) axis, as indicated by increased growth hormone (GH) and cortisol plasma levels. These two endocrine axes are crucially involved in sleep regulation. As various endocrine effects are age-dependent, aim was to study ghrelin's effect on sleep and secretion of GH and cortisol in elderly humans. Sleep-EEGs (2300-0700 h) and secretion profiles of GH and cortisol (2000-0700 h) were determined in 10 elderly men (64.0+/-2.2 years) and 10 elderly, postmenopausal women (63.0+/-2.9 years) twice, receiving 50 microg ghrelin or placebo at 2200, 2300, 0000, and 0100 h, in this single-blind, randomized, cross-over study. In men, ghrelin compared to placebo was associated with significantly more stage 2 sleep (placebo: 183.3+/-6.1; ghrelin: 221.0+/-12.2 min), slow wave sleep (placebo: 33.4+/-5.1; ghrelin: 44.3+/-7.7 min) and non-REM sleep (placebo: 272.6+/-12.8; ghrelin: 318.2+/-11.0 min). Stage 1 sleep (placebo: 56.9+/-8.7; ghrelin: 50.9+/-7.6 min) and REM sleep (placebo: 71.9+/-9.1; ghrelin: 52.5+/-5.9 min) were significantly reduced. Furthermore, delta power in men was significantly higher and alpha power and beta power were significantly lower after ghrelin than after placebo injection during the first half of night. In women, no effects on sleep were observed. In both sexes, ghrelin caused comparable increases and secretion patterns of GH and cortisol. In conclusion, ghrelin affects sleep in elderly men but not women resembling findings in young subjects.",
"title": ""
},
{
"docid": "neg:1840201_11",
"text": "In recent years, Wireless Sensor Networks (WSNs) have emerged as a new powerful technology used in many applications such as military operations, surveillance system, Intelligent Transport Systems (ITS) etc. These networks consist of many Sensor Nodes (SNs), which are not only used for monitoring but also capturing the required data from the environment. Most of the research proposals on WSNs have been developed keeping in view of minimization of energy during the process of extracting the essential data from the environment where SNs are deployed. The primary reason for this is the fact that the SNs are operated on battery which discharges quickly after each operation. It has been found in literature that clustering is the most common technique used for energy aware routing in WSNs. The most popular protocol for clustering in WSNs is Low Energy Adaptive Clustering Hierarchy (LEACH) which is based on adaptive clustering technique. This paper provides the taxonomy of various clustering and routing techniques in WSNs based upon metrics such as power management, energy management, network lifetime, optimal cluster head selection, multihop data transmission etc. A comprehensive discussion is provided in the text highlighting the relative advantages and disadvantages of many of the prominent proposals in this category which helps the designers to select a particular proposal based upon its merits over the others. & 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840201_12",
"text": "The aim of this systematic review was to compare the clinical performance and failure modes of teeth restored with intra-radicular retainers. A search was performed on PubMed/Medline, Central and ClinicalTrials databases for randomized clinical trials comparing clinical behavior and failures of at least two types of retainers. From 341 detected papers, 16 were selected for full-text analysis, of which 9 met the eligibility criteria. A manual search added 2 more studies, totalizing 11 studies that were included in this review. Evaluated retainers were fiber (prefabricated and customized) and metal (prefabricated and cast) posts, and follow-up ranged from 6 months to 10 years. Most studies showed good clinical behavior for evaluated intra-radicular retainers. Reported survival rates varied from 71 to 100% for fiber posts and 50 to 97.1% for metal posts. Studies found no difference in the survival among different metal posts and most studies found no difference between fiber and metal posts. Two studies also showed that remaining dentine height, number of walls and ferrule increased the longevity of the restored teeth. Failures of fiber posts were mainly due to post loss of retention, while metal post failures were mostly related to root fracture, post fracture and crown and/or post loss of retention. In conclusion, metal and fiber posts present similar clinical behavior at short to medium term follow-up. Remaining dental structure and ferrule increase the survival of restored pulpless teeth. Studies with longer follow-up are needed.",
"title": ""
},
{
"docid": "neg:1840201_13",
"text": "Real-time environment monitoring and analysis is an important research area of Internet of Things (IoT). Understanding the behavior of the complex ecosystem requires analysis of detailed observations of an environment over a range of different conditions. One such example in urban areas includes the study of tree canopy cover over the microclimate environment using heterogeneous sensor data. There are several challenges that need to be addressed, such as obtaining reliable and detailed observations over monitoring area, detecting unusual events from data, and visualizing events in real-time in a way that is easily understandable by the end users (e.g., city councils). In this regard, we propose an integrated geovisualization framework, built for real-time wireless sensor network data on the synergy of computational intelligence and visual methods, to analyze complex patterns of urban microclimate. A Bayesian maximum entropy-based method and a hyperellipsoidal model-based algorithm have been build in our integrated framework to address above challenges. The proposed integrated framework was verified using the dataset from an indoor and two outdoor network of IoT devices deployed at two strategically selected locations in Melbourne, Australia. The data from these deployments are used for evaluation and demonstration of these components’ functionality along with the designed interactive visualization components.",
"title": ""
},
{
"docid": "neg:1840201_14",
"text": "In this paper, a high efficiency and high power factor single-stage balanced forward-flyback converter merging a foward and flyback converter topologies is proposed. The conventional AC/DC flyback converter can achieve a good power factor but it has a high offset current through the transformer magnetizing inductor, which results in a large core loss and low power conversion efficiency. And, the conventional forward converter can achieve the good power conversion efficiency with the aid of the low core loss but the input current dead zone near zero cross AC input voltage deteriorates the power factor. On the other hand, since the proposed converter can operate as the forward and flyback converters during switch on and off periods, respectively, it cannot only perform the power transfer during an entire switching period but also achieve the high power factor due to the flyback operation. Moreover, since the current balanced capacitor can minimize the offset current through the transformer magnetizing inductor regardless of the AC input voltage, the core loss and volume of the transformer can be minimized. Therefore, the proposed converter features a high efficiency and high power factor. To confirm the validity of the proposed converter, theoretical analysis and experimental results from a prototype of 24W LED driver are presented.",
"title": ""
},
{
"docid": "neg:1840201_15",
"text": "Wireless sensor networks (WSNs) will play an active role in the 21th Century Healthcare IT to reduce the healthcare cost and improve the quality of care. The protection of data confidentiality and patient privacy are the most critical requirements for the ubiquitous use of WSNs in healthcare environments. This requires a secure and lightweight user authentication and access control. Symmetric key based access control is not suitable for WSNs in healthcare due to dynamic network topology, mobility, and stringent resource constraints. In this paper, we propose a secure, lightweight public key based security scheme, Mutual Authentication and Access Control based on Elliptic curve cryptography (MAACE). MAACE is a mutual authentication protocol where a healthcare professional can authenticate to an accessed node (a PDA or medical sensor) and vice versa. This is to ensure that medical data is not exposed to an unauthorized person. On the other hand, it ensures that medical data sent to healthcare professionals did not originate from a malicious node. MAACE is more scalable and requires less memory compared to symmetric key-based schemes. Furthermore, it is much more lightweight than other public key-based schemes. Security analysis and performance evaluation results are presented and compared to existing schemes to show advantages of the proposed scheme.",
"title": ""
},
{
"docid": "neg:1840201_16",
"text": "Overlay architectures are programmable logic systems that are compiled on top of a traditional FPGA. These architectures give designers flexibility, and have a number of benefits, such as being designed or optimized for specific application domains, making it easier or more efficient to implement solutions, being independent of platform, allowing the ability to do partial reconfiguration regardless of the underlying architecture, and allowing compilation without using vendor tools, in some cases with fully open source tool chains. This thesis describes the implementation of two FPGA overlay architectures, ZUMA and CARBON. These overlay implementations include optimizations to reduce area and increase speed which may be applicable to many other FPGAs and also ASIC systems. ZUMA is a fine-grain overlay which resembles a modern commercial FPGA, and is compatible with the VTR open source compilation tools. The implementation includes a number of novel features tailored to efficient FPGA implementation, including the utilization of reprogrammable LUTRAMs, a novel two-stage local routing crossbar, and an area efficient configuration controller. CARBON",
"title": ""
},
{
"docid": "neg:1840201_17",
"text": "As the cost of human full genome sequencing continues to fall, we will soon witness a prodigious amount of human genomic data in the public cloud. To protect the confidentiality of the genetic information of individuals, the data has to be encrypted at rest. On the other hand, encryption severely hinders the use of this valuable information, such as Genome-wide Range Query (GRQ), in medical/genomic research. While the problem of secure range query on outsourced encrypted data has been extensively studied, the current schemes are far from practical deployment in terms of efficiency and scalability due to the data volume in human genome sequencing. In this paper, we investigate the problem of secure GRQ over human raw aligned genomic data in a third-party outsourcing model. Our solution contains a novel secure range query scheme based on multi-keyword symmetric searchable encryption (MSSE). The proposed scheme incurs minimal ciphertext expansion and computation overhead. We also present a hierarchical GRQ-oriented secure index structure tailored for efficient and large-scale genomic data lookup in the cloud while preserving the query privacy. Our experiment on real human genomic data shows that a secure GRQ request with range size 100,000 over more than 300 million encrypted short reads takes less than 3 minutes, which is orders of magnitude faster than existing solutions.",
"title": ""
},
{
"docid": "neg:1840201_18",
"text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.",
"title": ""
},
{
"docid": "neg:1840201_19",
"text": "Neuroimage analysis usually involves learning thousands or even millions of variables using only a limited number of samples. In this regard, sparse models, e.g. the lasso, are applied to select the optimal features and achieve high diagnosis accuracy. The lasso, however, usually results in independent unstable features. Stability, a manifest of reproducibility of statistical results subject to reasonable perturbations to data and the model (Yu 2013), is an important focus in statistics, especially in the analysis of high dimensional data. In this paper, we explore a nonnegative generalized fused lasso model for stable feature selection in the diagnosis of Alzheimer’s disease. In addition to sparsity, our model incorporates two important pathological priors: the spatial cohesion of lesion voxels and the positive correlation between the features and the disease labels. To optimize the model, we propose an efficient algorithm by proving a novel link between total variation and fast network flow algorithms via conic duality. Experiments show that the proposed nonnegative model performs much better in exploring the intrinsic structure of data via selecting stable features compared with other state-of-the-arts. Introduction Neuroimage analysis is challenging due to its high feature dimensionality and data scarcity. Sparse models such as the lasso (Tibshirani 1996) have gained great reputation in statistics and machine learning, and they have been applied to the analysis of such high dimensional data by exploiting the sparsity property in the absence of abundant data. As a major result, automatic selection of relevant variables/features by such sparse formulation achieves promising performance. For example, in (Liu, Zhang, and Shen 2012), the lasso model was applied to the diagnosis of Alzheimer’s disease (AD) and showed better performance than the support vector machine (SVM), which is one of the state-of-the-arts in brain image classification. 
However, in statistics, it is known that the lasso does not always provide interpretable results because of its instability (Yu 2013). “Stability” here means the reproducibility of statistical results subject to reasonable perturbations to data and Copyright c © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. the model. (These perturbations include the often used Jacknife, bootstrap and cross-validation.) This unstable behavior of the lasso model is critical in high dimensional data analysis. The resulting irreproducibility of the feature selection are especially undesirable in neuroimage analysis/diagnosis. However, unlike the problems such as registration and classification, the stability issue of feature selection is much less studied in this field. In this paper we propose a model to induce more stable feature selection from high dimensional brain structural Magnetic Resonance Imaging (sMRI) images. Besides sparsity, the proposed model harnesses two important additional pathological priors in brain sMRI: (i) the spatial cohesion of lesion voxels (via inducing fusion terms) and (ii) the positive correlation between the features and the disease labels. The correlation prior is based on the observation that in many brain image analysis problems (such as AD, frontotemporal dementia, corticobasal degeneration, etc), there exist strong correlations between the features and the labels. For example, gray matter of AD is degenerated/atrophied. Therefore, the gray matter values (indicating the volume) are positively correlated with the cognitive scores or disease labels {-1,1}. That is, the less gray matter, the lower the cognitive score. Accordingly, we propose nonnegative constraints on the variables to enforce the prior and name the model as “non-negative Generalized Fused Lasso” (nGFL). 
It extends the popular generalized fused lasso and enables it to explore the intrinsic structure of data via selecting stable features. To measure feature stability, we introduce the “Estimation Stability” recently proposed in (Yu 2013) and the (multi-set) Dice coefficient (Dice 1945). Experiments demonstrate that compared with existing models, our model selects much more stable (and pathological-prior consistent) voxels. It is worth mentioning that the non-negativeness per se is a very important prior of many practical problems, e.g. (Lee and Seung 1999). Although nGFL is proposed to solve the diagnosis of AD in this work, the model can be applied to more general problems. Incorporating these priors makes the problem novel w.r.t. the lasso or generalized fused lasso from an optimization standpoint. Although off-the-shelf convex solvers such as CVX (Grant and Boyd 2013) can be applied to solve the optimization, it hardly scales to high-dimensional problems in feasible time. In this regard, we propose an efficient algorithm. (Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence)",
"title": ""
}
] |
1840202 | Unsupervised Learning of Visual Representations using Videos | [
{
"docid": "pos:1840202_0",
"text": "Our Approach, 0.66 GIST 29.7 Spatial Pyramid HOG 29.8 Spatial Pyramid SIFT 34.4 ROI-GIST 26.5 Scene DPM 30.4 MM-Scene 28.0 Object Bank 37.6 Ours 38.1 Ours+GIST 44.0 Ours+SP 46.4 Ours+GIST+SP 47.5 Ours+DPM 42.4 Ours+GIST+DPM 46.9 Ours+SP+DPM 46.4 GIST+SP+DPM 43.1 Ours+GIST+SP+DPM 49.4 Two key requirements • representative: Need to occur frequently enough • discriminative: Need to be different enough from the rest of the “visual world” Goal: a mid-level visual representation Experimental Analysis Bonus: works even better if weakly supervised!",
"title": ""
}
] | [
{
"docid": "neg:1840202_0",
"text": "We present an incremental polynomial-time algorithm for enumerating all circuits of a matroid or, more generally, all minimal spanning sets for a flat. We also show the NP-hardness of several related enumeration problems. †RUTCOR, Rutgers University, 640 Bartholomew Road, Piscataway NJ 08854-8003; ({boros,elbassio,gurvich}@rutcor.rutgers.edu). ‡Department of Computer Science, Rutgers University, 110 Frelinghuysen Road, Piscataway NJ 08854-8003; (leonid@cs.rutgers.edu). ∗This research was supported in part by the National Science Foundation Grant IIS0118635. The research of the first and third authors was also supported in part by the Office of Naval Research Grant N00014-92-J-1375. The second and third authors are also grateful for the partial support by DIMACS, the National Science Foundation’s Center for Discrete Mathematics and Theoretical Computer Science.",
"title": ""
},
{
"docid": "neg:1840202_1",
"text": "Web personalization systems are used to enhance the user experience by providing tailor-made services based on the user’s interests and preferences which are typically stored in user profiles. For such systems to remain effective, the profiles need to be able to adapt and reflect the users’ changing behaviour. In this paper, we introduce a set of methods designed to capture and track user interests and maintain dynamic user profiles within a personalization system. User interests are represented as ontological concepts which are constructed by mapping web pages visited by a user to a reference ontology and are subsequently used to learn short-term and long-term interests. A multi-agent system facilitates and coordinates the capture, storage, management and adaptation of user interests. We propose a search system that utilizes our dynamic user profile to provide a personalized search experience. We present a series of experiments that show how our system can effectively model a dynamic user profile and is capable of learning and adapting to different user browsing behaviours.",
"title": ""
},
{
"docid": "neg:1840202_2",
"text": "As humans are being progressively pushed further downstream in the decision-making process of autonomous systems, the need arises to ensure that moral standards, however defined, are adhered to by these robotic artifacts. While meaningful inroads have been made in this area regarding the use of ethical lethal military robots, including work by our laboratory, these needs transcend the warfighting domain and are pervasive, extending to eldercare, robot nannies, and other forms of service and entertainment robotic platforms. This paper presents an overview of the spectrum and specter of ethical issues raised by the advent of these systems, and various technical results obtained to date by our research group, geared towards managing ethical behavior in autonomous robots in relation to humanity. This includes: 1) the use of an ethical governor capable of restricting robotic behavior to predefined social norms; 2) an ethical adaptor which draws upon the moral emotions to allow a system to constructively and proactively modify its behavior based on the consequences of its actions; 3) the development of models of robotic trust in humans and its dual, deception, drawing on psychological models of interdependence theory; and 4) concluding with an approach towards the maintenance of dignity in human-robot relationships.",
"title": ""
},
{
"docid": "neg:1840202_3",
"text": "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Codes will be made publicly available.",
"title": ""
},
{
"docid": "neg:1840202_4",
"text": "In this paper we present the features of a Question/Answering (Q/A) system that had unparalleled performance in the TREC-9 evaluations. We explain the accuracy of our system through the unique characteristics of its architecture: (1) usage of a wide-coverage answer type taxonomy; (2) repeated passage retrieval; (3) lexico-semantic feedback loops; (4) extraction of the answers based on machine learning techniques; and (5) answer caching. Experimental results show the effects of each feature on the overall performance of the Q/A system and lead to general conclusions about Q/A from large text collections.",
"title": ""
},
{
"docid": "neg:1840202_5",
"text": "In this paper, we introduce linear and nonlinear consensus protocols for networks of dynamic agents that allow the agents to agree in a distributed and cooperative fashion. We consider the cases of networks with communication time-delays and channels that have filtering effects. We find a tight upper bound on the maximum fixed time-delay that can be tolerated in the network. It turns out that the connectivity of the network is the key in reaching a consensus. The case of agreement with bounded inputs is considered by analyzing the convergence of a class of nonlinear protocols. A Lyapunov function is introduced that quantifies the total disagreement among the nodes of a network. Simulation results are provided for agreement in networks with communication time-delays and constrained inputs.",
"title": ""
},
{
"docid": "neg:1840202_6",
"text": "Dynamically typed languages trade flexibility and ease of use for safety, while statically typed languages prioritize the early detection of bugs, and provide a better framework for structuring large programs. The idea of optional typing is to combine the two approaches in the same language: the programmer can begin development with dynamic types, and migrate to static types as the program matures. The challenge is designing a type system that feels natural to the programmer that is used to programming in a dynamic language.\n This paper presents the initial design of Typed Lua, an optionally-typed extension of the Lua scripting language. Lua is an imperative scripting language with first class functions and lightweight metaprogramming mechanisms. The design of Typed Lua's type system has a novel combination of features that preserves some of the idioms that Lua programmers are used to, while bringing static type safety to them. We show how the major features of the type system type these idioms with some examples, and discuss some of the design issues we faced.",
"title": ""
},
{
"docid": "neg:1840202_7",
"text": "Prediction of natural disasters and their consequences is difficult due to the uncertainties and complexity of multiple related factors. This article explores the use of domain knowledge and spatial data to construct a Bayesian network (BN) that facilitates the integration of multiple factors and quantification of uncertainties within a consistent system for assessment of catastrophic risk. A BN is chosen due to its advantages such as merging multiple source data and domain knowledge in a consistent system, learning from the data set, inference with missing data, and support of decision making. A key advantage of our methodology is the combination of domain knowledge and learning from the data to construct a robust network. To improve the assessment, we employ spatial data analysis and data mining to extend the training data set, select risk factors, and fine-tune the network. Another major advantage of our methodology is the integration of an optimal discretizer, informative feature selector, learners, search strategies for local topologies, and Bayesian model averaging. These techniques all contribute to a robust prediction of risk probability of natural disasters. In the flood disaster's study, our methodology achieved a better probability of detection of high risk, a better precision, and a better ROC area compared with other methods, using both cross-validation and prediction of catastrophic risk based on historic data. Our results suggest that BN is a good alternative for risk assessment and as a decision tool in the management of catastrophic risk.",
"title": ""
},
{
"docid": "neg:1840202_8",
"text": "The major challenge in designing wireless sensor networks (WSNs) is the support of the functional, such as data latency, and the non-functional, such as data integrity, requirements while coping with the computation, energy and communication constraints. Careful node placement can be a very effective optimization means for achieving the desired design goals. In this paper, we report on the current state of the research on optimized node placement in WSNs. We highlight the issues, identify the various objectives and enumerate the different models and formulations. We categorize the placement strategies into static and dynamic depending on whether the optimization is performed at the time of deployment or while the network is operational, respectively. We further classify the published techniques based on the role that the node plays in the network and the primary performance objective considered. The paper also highlights open problems in this area of research.",
"title": ""
},
{
"docid": "neg:1840202_9",
"text": "Project-based cross-sector partnerships to address social issues (CSSPs) occur in four “arenas”: business-nonprofit, business-government, government-nonprofit, and trisector. Research on CSSPs is multidisciplinary, and different conceptual “platforms” are used: resource dependence, social issues, and societal sector platforms. This article consolidates recent literature on CSSPs to improve the potential for cross-disciplinary fertilization and especially to highlight developments in various disciplines for organizational researchers. A number of possible directions for future research on the theory, process, practice, method, and critique of CSSPs are highlighted. The societal sector platform is identified as a particularly promising framework for future research.",
"title": ""
},
{
"docid": "neg:1840202_10",
"text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.",
"title": ""
},
{
"docid": "neg:1840202_11",
"text": "This paper describes a strategy to feature point correspondence and motion recovery in vehicle navigation. A transformation of the image plane is proposed that keeps the motion of the vehicle on a plane parallel to the transformed image plane. This permits to define linear tracking filters to estimate the real-world positions of the features, and allows us to select the matches that accomplish the rigidity of the scene by a Hough transform. Candidate correspondences are selected by similarity, taking into account the smoothness of motion. Further processing brings out the final matching. The methods have been tested in a real application. © 1999 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840202_12",
"text": "A grey wolf optimizer for modular neural network (MNN) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural networks architectures to perform human recognition, and to prove its effectiveness benchmark databases of ear, iris, and face biometric measures are used to perform tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists in finding optimal parameters of its architecture; these parameters are the number of subgranules, percentage of data for the training phase, learning algorithm, goal error, number of hidden layers, and their number of neurons. Nowadays, there is a great variety of approaches and new techniques within the evolutionary computing area, and these approaches and techniques have emerged to help find optimal solutions to problems or models and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and the results are compared against a genetic algorithm and a firefly algorithm in order to know which of these techniques provides better results when applied to human recognition.",
"title": ""
},
{
"docid": "neg:1840202_13",
"text": "Global energy consumption is projected to increase, even in the face of substantial declines in energy intensity, at least 2-fold by midcentury relative to the present because of population and economic growth. This demand could be met, in principle, from fossil energy resources, particularly coal. However, the cumulative nature of CO(2) emissions in the atmosphere demands that holding atmospheric CO(2) levels to even twice their preanthropogenic values by midcentury will require invention, development, and deployment of schemes for carbon-neutral energy production on a scale commensurate with, or larger than, the entire present-day energy supply from all sources combined. Among renewable energy resources, solar energy is by far the largest exploitable resource, providing more energy in 1 hour to the earth than all of the energy consumed by humans in an entire year. In view of the intermittency of insolation, if solar energy is to be a major primary energy source, it must be stored and dispatched on demand to the end user. An especially attractive approach is to store solar-converted energy in the form of chemical bonds, i.e., in a photosynthetic process at a year-round average efficiency significantly higher than current plants or algae, to reduce land-area requirements. Scientific challenges involved with this process include schemes to capture and convert solar energy and then store the energy in the form of chemical bonds, producing oxygen from water and a reduced fuel such as hydrogen, methane, methanol, or other hydrocarbon species.",
"title": ""
},
{
"docid": "neg:1840202_14",
"text": "AVR XMEGA is the recent general-purpose 8-bit microcontroller from Atmel featuring symmetric crypto engines. We analyze the resistance of XMEGA crypto engines to side channel attacks. We reveal the relatively strong side channel leakage of the AES engine that enables full 128-bit AES secret key recovery in a matter of several minutes with a measurement setup cost about 1000 USD. 3000 power consumption traces are sufficient for the successful attack. Our analysis was performed without knowing the details of the crypto engine internals; quite the contrary, it reveals some details about the implementation. We sketch other feasible side channel attacks on XMEGA and suggest the counter-measures that can raise the complexity of the attacks but not fully prevent them.",
"title": ""
},
{
"docid": "neg:1840202_15",
"text": "This letter presents a 60-GHz 2 × 2 low temperature co-fired ceramic (LTCC) aperture-coupled patch antenna array with an integrated Sievenpiper electromagnetic band-gap (EBG) structure used to suppress TM-mode surface waves. The merit of this EBG structure is to yield a predicted 4-dB enhancement in broadside directivity and gain, and an 8-dB improvement in sidelobe level. The novelty of this antenna lies in the combination of a relatively new LTCC material system (DuPont Greentape 9K7) along with laser ablation processing for fine line and fine slot definition (50-μm gaps with ±6 μm tolerance) allowing the first successful integration of a Sievenpiper EBG structure with a millimeter-wave LTCC patch array. A measured broadside gain/directivity of 11.5/14 dBi at 60 GHz is achieved with an aperture footprint of only 350 × 410 mil² (1.78λ × 2.08λ) including the EBG structure. This thin (27 mil) LTCC array is well suited for chip-scale package applications.",
"title": ""
},
{
"docid": "neg:1840202_16",
"text": "Current pervasive games are mostly location-aware applications, played on handheld computing devices. Considering pervasive games for children, it is argued that the interaction paradigm existing games support limits essential aspects of outdoor play like spontaneous social interaction, physical movement, and rich face-to-face communication. We present a new genre of pervasive games conceived to address this problem, that we call “Head Up Games” (HUGs) to underline that they liberate players from facing down to attend to screen-based interactions. The article discusses characteristics of HUG and relates them to existing genres of pervasive games. We present lessons learned during the design and evaluation of three HUG and chart future challenges.",
"title": ""
},
{
"docid": "neg:1840202_17",
"text": "This paper presents a novel method for sensorless brushless dc (BLDC) motor drives. Based on the characteristics of the back electromotive force (EMF), rotor position signals are constructed. These signals are constructed so that the phase difference between them and the back EMFs is controllable. Then, the rotor-position error can be compensated by controlling the phase-difference angle in real time. In this paper, the rotor-position-detection error is analyzed. Using the TMS320F2812 chip as a control core, experiments have been carried out on a prototype surface-mounted permanent magnet BLDC motor, and the experimental results verify the analysis and demonstrate the advantages of the proposed sensorless-control method.",
"title": ""
},
{
"docid": "neg:1840202_18",
"text": "Virtual immersive environments or telepresence setups often consist of multiple cameras that have to be calibrated. We present a convenient method for doing this. The minimum is three cameras, but there is no upper limit. The method is fully automatic and a freely moving bright spot is the only calibration object. A set of virtual 3D points is made by waving the bright spot through the working volume. Its projections are found with subpixel precision and verified by a robust RANSAC analysis. The cameras do not have to see all points; only reasonable overlap between camera subgroups is necessary. Projective structures are computed via rank-4 factorization and the Euclidean stratification is done by imposing geometric constraints. This linear estimate initializes a postprocessing computation of nonlinear distortion, which is also fully automatic. We suggest a trick on how to use a very ordinary laser pointer as the calibration object. We show that it is possible to calibrate an immersive virtual environment with 16 cameras in less than 60 minutes reaching about 1/5 pixel reprojection error. The method has been successfully tested on numerous multicamera environments using varying numbers of cameras of varying quality.",
"title": ""
},
{
"docid": "neg:1840202_19",
"text": "Students’ increasing use of text messaging language has prompted concern that textisms (e.g., 2 for to, dont for don’t, ☺) will intrude into their formal written work. Eighty-six Australian and 150 Canadian undergraduates were asked to rate the appropriateness of textism use in various situations. Students distinguished between the appropriateness of using textisms in different writing modalities and to different recipients, rating textism use as inappropriate in formal exams and assignments, but appropriate in text messages, online chat and emails with friends and siblings. In a second study, we checked the examination papers of a separate sample of 153 Australian undergraduates for the presence of textisms. Only a negligible number were found. We conclude that, overall, university students recognise the different requirements of different recipients and modalities when considering textism use and that students are able to avoid textism use in exams despite media reports to the contrary.",
"title": ""
}
] |
1840203 | Training a text classifier with a single word using Twitter Lists and domain adaptation | [
{
"docid": "pos:1840203_0",
"text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties. We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1 Making decisions based on a cost matrix Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let the (i, j) entry in a cost matrix C be the cost of predicting class i when the true class is j. If i = j then the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes L(x, i) = Σ_j P(j|x) C(i, j). (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x, i) is a sum over the alternative possibilities for the true class of x. 
In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P(j|x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x. The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate. 1.1 Cost matrix properties A cost matrix C always has the following structure when there are only two classes: actual negative actual positive predict negative C(0,0) = c00 C(0,1) = c01 predict positive C(1,0) = c10 C(1,1) = c11 Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In our notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly. Mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive. Similarly, if c10 > c00 but c11 ≥ c01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. Margineantu [2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j). 
In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix. The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:",
"title": ""
},
{
"docid": "pos:1840203_1",
"text": "On-line learning in domains where the target concept depends on some hidden context poses serious problems. A changing context can induce changes in the target concepts, producing what is known as concept drift. We describe a family of learning algorithms that flexibly react to concept drift and can take advantage of situations where contexts reappear. The general approach underlying all these algorithms consists of (1) keeping only a window of currently trusted examples and hypotheses; (2) storing concept descriptions and re-using them when a previous context re-appears; and (3) controlling both of these functions by a heuristic that constantly monitors the system's behavior. The paper reports on experiments that test the systems' performance under various conditions such as different levels of noise and different extent and rate of concept drift.",
"title": ""
},
{
"docid": "pos:1840203_2",
"text": "Hidden properties of social media users, such as their ethnicity, gender, and location, are often reflected in their observed attributes, such as their first and last names. Furthermore, users who communicate with each other often have similar hidden properties. We propose an algorithm that exploits these insights to cluster the observed attributes of hundreds of millions of Twitter users. Attributes such as user names are grouped together if users with those names communicate with other similar users. We separately cluster millions of unique first names, last names, and userprovided locations. The efficacy of these clusters is then evaluated on a diverse set of classification tasks that predict hidden users properties such as ethnicity, geographic location, gender, language, and race, using only profile names and locations when appropriate. Our readily-replicable approach and publiclyreleased clusters are shown to be remarkably effective and versatile, substantially outperforming state-of-the-art approaches and human accuracy on each of the tasks studied.",
"title": ""
}
] | [
{
"docid": "neg:1840203_0",
"text": "This paper proposed a novel control technique that switches between pulse width modulation (PWM) mode and pulse frequency modulation (PFM) mode to keep efficiency high over a wide range of loading. The proposed method uses PWM and PFM detectors to switch between the two modes appropriately. This control technique raises the efficiency of the current-mode DC-DC buck converter to 88% at light loading; the design is implemented in a TSMC 0.35 μm CMOS process.",
"title": ""
},
{
"docid": "neg:1840203_1",
"text": "Text summarization solves the problem of extracting important information from huge amount of text data. There are various methods in the literature that aim to find out well-formed summaries. One of the most commonly used methods is the Latent Semantic Analysis (LSA). In this paper, different LSA based summarization algorithms are explained and two new LSA based summarization algorithms are proposed. The algorithms are evaluated on Turkish documents, and their performances are compared using their ROUGE-L scores. One of our algorithms produces the best scores.",
"title": ""
},
{
"docid": "neg:1840203_2",
"text": "An F-RAN is presented in this article as a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. The core idea is to take full advantage of local radio signal processing, cooperative radio resource management, and distributed storing capabilities in edge devices, which can decrease the heavy burden on fronthaul and avoid large-scale radio signal processing in the centralized baseband unit pool. This article comprehensively presents the system architecture and key techniques of F-RANs. In particular, key techniques and their corresponding solutions, including transmission mode selection and interference suppression, are discussed. Open issues in terms of edge caching, software-defined networking, and network function virtualization are also identified.",
"title": ""
},
{
"docid": "neg:1840203_3",
"text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. [Figure: Producer 2 issues energy and posts purchase offers as atomic transactions; the consumer looks through the posted offers in the blockchain stream and chooses the cheapest to satisfy its own demand.]",
"title": ""
},
{
"docid": "neg:1840203_4",
"text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages (contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.",
"title": ""
},
{
"docid": "neg:1840203_5",
"text": "PURPOSE\nThe authors conducted a systematic review of the published literature on social media use in medical education to answer two questions: (1) How have interventions using social media tools affected outcomes of satisfaction, knowledge, attitudes, and skills for physicians and physicians-in-training? and (2) What challenges and opportunities specific to social media have educators encountered in implementing these interventions?\n\n\nMETHOD\nThe authors searched the MEDLINE, CINAHL, ERIC, Embase, PsycINFO, ProQuest, Cochrane Library, Web of Science, and Scopus databases (from the start of each through September 12, 2011) using keywords related to social media and medical education. Two authors independently reviewed the search results to select peer-reviewed, English-language articles discussing social media use in educational interventions at any level of physician training. They assessed study quality using the Medical Education Research Study Quality Instrument.\n\n\nRESULTS\nFourteen studies met inclusion criteria. Interventions using social media tools were associated with improved knowledge (e.g., exam scores), attitudes (e.g., empathy), and skills (e.g., reflective writing). The most commonly reported opportunities related to incorporating social media tools were promoting learner engagement (71% of studies), feedback (57%), and collaboration and professional development (both 36%). The most commonly cited challenges were technical issues (43%), variable learner participation (43%), and privacy/security concerns (29%). Studies were generally of low to moderate quality; there was only one randomized controlled trial.\n\n\nCONCLUSIONS\nSocial media use in medical education is an emerging field of scholarship that merits further investigation. Educators face challenges in adapting new technologies, but they also have opportunities for innovation.",
"title": ""
},
{
"docid": "neg:1840203_6",
"text": "In an attempt to preserve the structural information in malware binaries during feature extraction, function call graph-based features have been used in various research works in malware classification. However, the approach usually employed when performing classification on these graphs, is based on computing graph similarity using computationally intensive techniques. Due to this, much of the previous work in this area incurred large performance overhead and does not scale well. In this paper, we propose a linear time function call graph (FCG) vector representation based on function clustering that has significant performance gains in addition to improved classification accuracy. We also show how this representation can enable using graph features together with other non-graph features.",
"title": ""
},
{
"docid": "neg:1840203_7",
"text": "Massive classification, a classification task defined over a vast number of classes (hundreds of thousands or even millions), has become an essential part of many real-world systems, such as face recognition. Existing methods, including the deep networks that achieved remarkable success in recent years, were mostly devised for problems with a moderate number of classes. They would meet with substantial difficulties, e.g. excessive memory demand and computational cost, when applied to massive problems. We present a new method to tackle this problem. This method can efficiently and accurately identify a small number of “active classes” for each mini-batch, based on a set of dynamic class hierarchies constructed on the fly. We also develop an adaptive allocation scheme thereon, which leads to a better tradeoff between performance and cost. On several large-scale benchmarks, our method significantly reduces the training cost and memory demand, while maintaining competitive performance.",
"title": ""
},
{
"docid": "neg:1840203_8",
"text": "Purchasing decisions in many product categories are heavily influenced by the shopper's aesthetic preferences. It's insufficient to simply match a shopper with popular items from the category in question; a successful shopping experience also identifies products that match those aesthetics. The challenge of capturing shoppers' styles becomes more difficult as the size and diversity of the marketplace increases. At Etsy, an online marketplace for handmade and vintage goods with over 30 million diverse listings, the problem of capturing taste is particularly important -- users come to the site specifically to find items that match their eclectic styles.\n In this paper, we describe our methods and experiments for deploying two new style-based recommender systems on the Etsy site. We use Latent Dirichlet Allocation (LDA) to discover trending categories and styles on Etsy, which are then used to describe a user's \"interest\" profile. We also explore hashing methods to perform fast nearest neighbor search on a map-reduce framework, in order to efficiently obtain recommendations. These techniques have been implemented successfully at very large scale, substantially improving many key business metrics.",
"title": ""
},
{
"docid": "neg:1840203_9",
"text": "We present regular linear temporal logic (RLTL), a logic that generalizes linear temporal logic with the ability to use regular expressions arbitrarily as sub-expressions. Every LTL operator can be defined as a context in regular linear temporal logic. This implies that there is a (linear) translation from LTL to RLTL. Unlike LTL, regular linear temporal logic can define all ω-regular languages, while still keeping the satisfiability problem in PSPACE. Unlike the extended temporal logics ETL∗, RLTL is defined with an algebraic signature. In contrast to the linear time μ-calculus, RLTL does not depend on fix-points in its syntax.",
"title": ""
},
{
"docid": "neg:1840203_10",
"text": "Gold nanoparticles are widely used in biomedical imaging and diagnostic tests. Based on their established use in the laboratory and the chemical stability of Au(0), gold nanoparticles were expected to be safe. The recent literature, however, contains conflicting data regarding the cytotoxicity of gold nanoparticles. Against this background a systematic study of water-soluble gold nanoparticles stabilized by triphenylphosphine derivatives ranging in size from 0.8 to 15 nm is made. The cytotoxicity of these particles in four cell lines representing major functional cell types with barrier and phagocyte function are tested. Connective tissue fibroblasts, epithelial cells, macrophages, and melanoma cells prove most sensitive to gold particles 1.4 nm in size, which results in IC(50) values ranging from 30 to 56 microM depending on the particular 1.4-nm Au compound-cell line combination. In contrast, gold particles 15 nm in size and Tauredon (gold thiomalate) are nontoxic at up to 60-fold and 100-fold higher concentrations, respectively. The cellular response is size dependent, in that 1.4-nm particles cause predominantly rapid cell death by necrosis within 12 h while closely related particles 1.2 nm in diameter effect predominantly programmed cell death by apoptosis.",
"title": ""
},
{
"docid": "neg:1840203_11",
"text": "The measurement of different target parameters using radar systems has been an active research area for the last decades. Particularly target angle measurement is a very demanding topic, because obtaining good measurement results often goes hand in hand with extensive hardware effort. Especially for sensors used in the mass market, e.g. in automotive applications like adaptive cruise control this may be prohibitive. Therefore we address target localization using a compact frequency-modulated continuous-wave (FMCW) radar sensor. The angular measurement results are improved compared to standard beamforming methods using an adaptive beamforming approach. This approach will be applied to the FMCW principle in a way that allows the use of well known methods for the determination of other target parameters like range or velocity. The applicability of the developed theory will be shown on different measurement scenarios using a 24-GHz prototype radar system.",
"title": ""
},
{
"docid": "neg:1840203_12",
"text": "Deep neural networks have been widely used in numerous computer vision applications, particularly in face recognition. However, deploying deep neural network face recognition on mobile devices is still limited since most high-accuracy deep models are both time- and GPU-consuming in the inference stage. Therefore, developing a lightweight deep neural network is one of the most promising solutions to deploy face recognition on mobile devices. Such a lightweight deep neural network requires efficient memory with a small number of weights and low-cost operators. In this paper a novel deep neural network named MobiFace, which is simple but effective, is proposed for productively deploying face recognition on mobile devices. The experimental results have shown that our lightweight MobiFace is able to achieve high performance with 99.7% on the LFW database and 91.3% on the large-scale challenging Megaface database. It is also competitive against large-scale deep-network face recognition while significantly reducing computational time and memory consumption.",
"title": ""
},
{
"docid": "neg:1840203_13",
"text": "This paper addresses deep face recognition (FR) problem under open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss that enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of angular margin can be quantitatively adjusted by a parameter m. We further derive specific m to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Face in the Wild (LFW), Youtube Faces (YTF) and MegaFace Challenge 1 show the superiority of A-Softmax loss in FR tasks.",
"title": ""
},
{
"docid": "neg:1840203_14",
"text": "Event Detection (ED) aims to identify instances of specified types of events in text, which is a crucial component in the overall task of event extraction. The commonly used features consist of lexical, syntactic, and entity information, but the knowledge encoded in the Abstract Meaning Representation (AMR) has not been utilized in this task. AMR is a semantic formalism in which the meaning of a sentence is encoded as a rooted, directed, acyclic graph. In this paper, we demonstrate the effectiveness of AMR to capture and represent the deeper semantic contexts of the trigger words in this task. Experimental results further show that adding AMR features on top of the traditional features can achieve 67.8% (with 2.1% absolute improvement) F-measure (F1), which is comparable to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840203_15",
"text": "EASL–EORTC Clinical Practice Guidelines (CPG) on the management of hepatocellular carcinoma (HCC) define the use of surveillance, diagnosis, and therapeutic strategies recommended for patients with this type of cancer. This is the first European joint effort by the European Association for the Study of the Liver (EASL) and the European Organization for Research and Treatment of Cancer (EORTC) to provide common guidelines for the management of hepatocellular carcinoma. These guidelines update the recommendations reported by the EASL panel of experts in HCC published in 2001 [1]. Several clinical and scientific advances have occurred during the past decade and, thus, a modern version of the document is urgently needed. The purpose of this document is to assist physicians, patients, health-care providers, and health-policy makers from Europe and worldwide in the decision-making process according to evidence-based data. Users of these guidelines should be aware that the recommendations are intended to guide clinical practice in circumstances where all possible resources and therapies are available. Thus, they should adapt the recommendations to their local regulations and/or team capacities, infrastructure, and cost–benefit strategies. Finally, this document sets out some recommendations that should be instrumental in advancing the research and knowledge of this disease and ultimately contribute to improving patient care. The EASL–EORTC CPG on the management of hepatocellular carcinoma provide recommendations based on the level of evidence.",
"title": ""
},
{
"docid": "neg:1840203_16",
"text": "Individual decision-making forms the basis for nearly all of microeconomic analysis. These notes outline the standard economic model of rational choice in decision-making. In the standard view, rational choice is defined to mean the process of determining what options are available and then choosing the most preferred one according to some consistent criterion. In a certain sense, this rational choice model is already an optimization-based approach. We will find that by adding one empirically unrestrictive assumption, the problem of rational choice can be represented as one of maximizing a real-valued utility function. The utility maximization approach grew out of a remarkable intellectual convergence that began during the 19th century. On one hand, utilitarian philosophers were seeking an objective criterion for a science of government. If policies were to be decided based on attaining the “greatest good for the greatest number,” they would need to find a utility index that could measure how beneficial different policies were to different people. On the other hand, thinkers following Adam Smith were trying to refine his ideas about how an economic system based on individual self-interest would work. Perhaps that project, too, could be advanced by developing an index of self-interest, assessing how beneficial various outcomes were. [Footnote: These notes are an evolving, collaborative product. The first version was by Antonio Rangel in Fall 2000. Those original notes were edited and expanded by Jon Levin in Fall 2001 and 2004, and by Paul Milgrom in Fall 2002 and 2003.]",
"title": ""
},
{
"docid": "neg:1840203_17",
"text": "The adaptive fuzzy and fuzzy neural models are being widely used for identification of dynamic systems. This paper describes different fuzzy logic and neural fuzzy models. The robustness of models has further been checked by Simulink implementation of the models with application to the problem of system identification. The approach is to identify the system by minimizing the cost function using parameters update.",
"title": ""
},
{
"docid": "neg:1840203_18",
"text": "Neutrosophic numbers easily allow modeling uncertainties of prices universe, thus justifying the growing interest for theoretical and practical aspects of arithmetic generated by some special numbers in our work. At the beginning of this paper, we reconsider the importance in applied research of instrumental discernment, viewed as the main support of the final measurement validity. Theoretically, the need for discernment is revealed by decision logic, and more recently by the new neutrosophic logic and by constructing neutrosophic-type index numbers, exemplified in the context and applied to the world of prices, and, from a practical standpoint, by the possibility to use index numbers in characterization of some cyclical phenomena and economic processes, e.g. inflation rate. The neutrosophic index numbers or neutrosophic indexes are the key topic of this article. The next step is an interrogative and applicative one, drawing the coordinates of an optimized discernment centered on neutrosophic-type index numbers. The inevitable conclusions are optimistic in relation to the common future of the index method and neutrosophic logic, with statistical and economic meaning and utility.",
"title": ""
},
{
"docid": "neg:1840203_19",
"text": "The ability to assess the reputation of a member in a web community is a need addressed in many different ways according to the many different stages in which the nature of communities has evolved over time. In the case of reputation of goods/services suppliers, the solutions available to prevent the feedback abuse are generally reliable but centralized under the control of few big Internet companies. In this paper we show how a decentralized and distributed feedback management system can be built on top of the Bitcoin blockchain.",
"title": ""
}
] |
1840204 | Character-Aware Neural Networks for Arabic Named Entity Recognition for Social Media | [
{
"docid": "pos:1840204_0",
"text": "The existing machine translation systems, whether phrase-based or neural, have relied almost exclusively on word-level modelling with explicit segmentation. In this paper, we ask a fundamental question: can neural machine translation generate a character sequence without any explicit segmentation? To answer this question, we evaluate an attention-based encoder– decoder with a subword-level encoder and a character-level decoder on four language pairs–En-Cs, En-De, En-Ru and En-Fi– using the parallel corpora from WMT’15. Our experiments show that the models with a character-level decoder outperform the ones with a subword-level decoder on all of the four language pairs. Furthermore, the ensembles of neural models with a character-level decoder outperform the state-of-the-art non-neural machine translation systems on En-Cs, En-De and En-Fi and perform comparably on En-Ru.",
"title": ""
},
{
"docid": "pos:1840204_1",
"text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.",
"title": ""
},
{
"docid": "pos:1840204_2",
"text": "We present a language agnostic, unsupervised method for inducing morphological transformations between words. The method relies on certain regularities manifest in highdimensional vector spaces. We show that this method is capable of discovering a wide range of morphological rules, which in turn are used to build morphological analyzers. We evaluate this method across six different languages and nine datasets, and show significant improvements across all languages.",
"title": ""
},
{
"docid": "pos:1840204_3",
"text": "Named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance. In this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional LSTM and CNN architecture, eliminating the need for most feature engineering. We also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. Extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the CoNLL-2003 dataset and surpasses the previously reported state of the art performance on the OntoNotes 5.0 dataset by 2.13 F1 points. By using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an F1 score of 91.62 on CoNLL-2003 and 86.28 on OntoNotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.",
"title": ""
}
] | [
{
"docid": "neg:1840204_0",
"text": "In this paper, we propose a synthetic generation method for time-series data based on generative adversarial networks (GANs) and apply it to data augmentation for biosignal classification. GANs are a recently proposed framework for learning a generative model, where two neural networks, one generating synthetic data and the other discriminating synthetic and real data, are trained while competing with each other. In the proposed method, each neural network in GANs is developed based on a recurrent neural network using long short-term memories, thereby allowing the adaptation of the GANs framework to time-series data generation. In the experiments, we confirmed the capability of the proposed method for generating synthetic biosignals using the electrocardiogram and electroencephalogram datasets. We also showed the effectiveness of the proposed method for data augmentation in the biosignal classification problem.",
"title": ""
},
{
"docid": "neg:1840204_1",
"text": "The paper attempts to provide a forecasting methodological framework and concrete models to estimate the long-run probability of default term structure for Hungarian corporate debt instruments, in line with IFRS 9 requirements. Long-run probability of default and expected loss can be estimated by various methods and have fifty-five years of history in the literature. After studying the literature and empirical models, the Markov chain approach was selected to accomplish lifetime probability of default modeling for Hungarian corporate debt instruments. Empirical results reveal that both discrete and continuous homogeneous Markov chain models systematically overestimate the long-term corporate probability of default. However, the continuous nonhomogeneous Markov chain gives both intuitively and empirically appropriate probability of default trajectories. The estimated term structure mathematically and professionally properly expresses the probability of default element of expected loss that can realistically occur in the long run in Hungarian corporate lending. The elaborated models can be easily implemented at Hungarian corporate financial institutions.",
"title": ""
},
{
"docid": "neg:1840204_2",
"text": "Authorship verification can be checked using stylometric techniques through the analysis of linguistic styles and writing characteristics of the authors. Stylometry is a behavioral feature that a person exhibits during writing and can be extracted and used potentially to check the identity of the author of online documents. Although stylometric techniques can achieve high accuracy rates for long documents, it is still challenging to identify an author for short documents, in particular when dealing with large authors populations. These hurdles must be addressed for stylometry to be usable in checking authorship of online messages such as emails, text messages, or twitter feeds. In this paper, we pose some steps toward achieving that goal by proposing a supervised learning technique combined with n-gram analysis for authorship verification in short texts. Experimental evaluation based on the Enron email dataset involving 87 authors yields very promising results consisting of an Equal Error Rate (EER) of 14.35% for message blocks of 500 characters.",
"title": ""
},
{
"docid": "neg:1840204_3",
"text": "Graphical models, as applied to multi-target prediction problems, commonly utilize interaction terms to impose structure among the output variables. Often, such structure is based on the assumption that related outputs need to be similar and interaction terms that force them to be closer are adopted. Here we relax that assumption and propose a feature that is based on distance and can adapt to ensure that variables have smaller or larger difference in values. We utilized a Gaussian Conditional Random Field model, where we have extended its originally proposed interaction potential to include a distance term. The extended model is compared to the baseline in various structured regression setups. An increase in predictive accuracy was observed on both synthetic examples and real-world applications, including challenging tasks from climate and healthcare domains.",
"title": ""
},
{
"docid": "neg:1840204_4",
"text": "Big Data and Cloud computing are the most important technologies that give the opportunity for government agencies to gain a competitive advantage and improve their organizations. On one hand, Big Data implementation requires investing a significant amount of money in hardware, software, and workforce. On the other hand, Cloud Computing offers an unlimited, scalable and on-demand pool of resources which provide the ability to adopt Big Data technology without wasting on the financial resources of the organization and make the implementation of Big Data faster and easier. The aim of this study is to conduct a systematic literature review in order to collect data to identify the benefits and challenges of Big Data on Cloud for government agencies and to make a clear understanding of how combining Big Data and Cloud Computing help to overcome some of these challenges. The last objective of this study is to identify the solutions for related challenges of Big Data. Four research questions were designed to determine the information that is related to the objectives of this study. Data is collected using literature review method and the results are deduced from there.",
"title": ""
},
{
"docid": "neg:1840204_5",
"text": "Technology roadmapping is becoming an increasingly important and widespread approach for aligning technology with organizational goals. The popularity of roadmapping is due mainly to the communication and networking benefits that arise from the development and dissemination of roadmaps, particularly in terms of building common understanding across internal and external organizational boundaries. From its origins in Motorola and Corning more than 25 years ago, where it was used to link product and technology plans, the approach has been adapted for many different purposes in a wide variety of sectors and at all levels, from small enterprises to national foresight programs. Building on previous papers presented at PICMET, concerning the rapid initiation of the technique, and how to customize the approach, this paper highlights the evolution and continuing growth of the method and its application to general strategic planning. The issues associated with extending the roadmapping method to form a central element of an integrated strategic planning process are considered.",
"title": ""
},
{
"docid": "neg:1840204_6",
"text": "We describe our experience with collecting roughly 250,000 image annotations on Amazon Mechanical Turk (AMT). The annotations we collected range from location of keypoints and figure ground masks of various object categories, 3D pose estimates of head and torsos of people in images and attributes like gender, race, type of hair, etc. We describe the setup and strategies we adopted to automatically approve and reject the annotations, which becomes important for large scale annotations. These annotations were used to train algorithms for detection, segmentation, pose estimation, action recognition and attribute recognition of people in images.",
"title": ""
},
{
"docid": "neg:1840204_7",
"text": "We present a novel approach, called selectional branching, which uses confidence estimates to decide when to employ a beam, providing the accuracy of beam search at speeds close to a greedy transition-based dependency parsing approach. Selectional branching is guaranteed to perform a fewer number of transitions than beam search yet performs as accurately. We also present a new transition-based dependency parsing algorithm that gives a complexity of O(n) for projective parsing and an expected linear time speed for non-projective parsing. With the standard setup, our parser shows an unlabeled attachment score of 92.96% and a parsing speed of 9 milliseconds per sentence, which is faster and more accurate than the current state-of-the-art transitionbased parser that uses beam search.",
"title": ""
},
{
"docid": "neg:1840204_8",
"text": "Grids are commonly used as histograms to process spatial data in order to detect frequent patterns, predict destinations, or to infer popular places. However, they have not been previously used for GPS trajectory similarity searches or retrieval in general. Instead, slower and more complicated algorithms based on individual point-pair comparison have been used. We demonstrate how a grid representation can be used to compute four different route measures: novelty, noteworthiness, similarity, and inclusion. The measures may be used in several applications such as identifying taxi fraud, automatically updating GPS navigation software, optimizing traffic, and identifying commuting patterns. We compare our proposed route similarity measure, C-SIM, to eight popular alternatives including Edit Distance on Real sequence (EDR) and Fréchet distance. The proposed measure is simple to implement and we give a fast, linear time algorithm for the task. It works well under noise, changes in sampling rate, and point shifting. We demonstrate that by using the grid, a route similarity ranking can be computed in real-time on the Mopsi2014 route dataset, which consists of over 6,000 routes. This ranking is an extension of the most similar route search and contains an ordered list of all similar routes from the database. The real-time search is due to indexing the cell database and comes at the cost of spending 80% more memory space for the index. The methods are implemented inside the Mopsi route module.",
"title": ""
},
{
"docid": "neg:1840204_9",
"text": "Previously, techniques such as class hierarchy analysis and profile-guided receiver class prediction have been demonstrated to greatly improve the performance of applications written in pure object-oriented languages, but the degree to which these results are transferable to applications written in hybrid languages has been unclear. In part to answer this question, we have developed the Vortex compiler infrastructure, a language-independent optimizing compiler for object-oriented languages, with front-ends for Cecil, C++, Java, and Modula-3. In this paper, we describe the Vortex compiler's intermediate language, internal structure, and optimization suite, and then we report the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four languages. We characterize the benchmark programs in terms of a collection of static and dynamic metrics, intended to quantify aspects of the \"object-orientedness\" of a program.",
"title": ""
},
{
"docid": "neg:1840204_10",
"text": "Traffic light control systems are widely used to monitor and control the flow of automobiles through the junction of many roads. They aim to realize smooth motion of cars in the transportation routes. However, the synchronization of multiple traffic light systems at adjacent intersections is a complicated problem given the various parameters involved. Conventional systems do not handle variable flows approaching the junctions. In addition, the mutual interference between adjacent traffic light systems, the disparity of cars flow with time, the accidents, the passage of emergency vehicles, and the pedestrian crossing are not implemented in the existing traffic system. This leads to traffic jam and congestion. We propose a system based on PIC microcontroller that evaluates the traffic density using IR sensors and accomplishes dynamic timing slots with different levels. Moreover, a portable controller device is designed to solve the problem of emergency vehicles stuck in the overcrowded roads.",
"title": ""
},
{
"docid": "neg:1840204_11",
"text": "Augmented reality (AR) is a technology in which a user's view of the real world is enhanced or augmented with additional information generated from a computer model. To have a working AR system, the see-through display system must be calibrated so that the graphics are properly rendered. The optical see-through systems present an additional challenge because, unlike the video see-through systems, we do not have direct access to the image data to be used in various calibration procedures. This paper reports on a calibration method we developed for optical see-through head-mounted displays. We first introduce a method for calibrating monocular optical see-through displays (that is, a display for one eye only) and then extend it to stereo optical see-through displays in which the displays for both eyes are calibrated in a single procedure. The method integrates the measurements for the camera and a six-degrees-of-freedom tracker that is attached to the camera to do the calibration. We have used both an off-the-shelf magnetic tracker as well as a vision-based infrared tracker we have built. In the monocular case, the calibration is based on the alignment of image points with a single 3D point in the world coordinate system from various viewpoints. In this method, the user interaction to perform the calibration is extremely easy compared to prior methods, and there is no requirement for keeping the head immobile while performing the calibration. In the stereo calibration case, the user aligns a stereoscopically fused 2D marker, which is perceived in depth, with a single target point in the world whose coordinates are known. As in the monocular case, there is no requirement that the user keep his or her head fixed.",
"title": ""
},
{
"docid": "neg:1840204_12",
"text": "Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors, as well as life habits, may affect semen quality. Artificial intelligence techniques are now an emerging methodology as decision support systems in medicine. In this paper we compare three artificial intelligence techniques, decision trees, Multilayer Perceptron and Support Vector Machines, in order to evaluate their performance in the prediction of the seminal quality from the data of the environmental factors and lifestyle. To do that we collect data by a normalized questionnaire from young healthy volunteers and then, we use the results of a semen analysis to assess the accuracy in the prediction of the three classification methods mentioned above. The results show that Multilayer Perceptron and Support Vector Machines show the highest accuracy, with prediction accuracy values of 86% for some of the seminal parameters. In contrast decision trees provide a visual and illustrative approach that can compensate the slightly lower accuracy obtained. In conclusion artificial intelligence methods are a useful tool in order to predict the seminal profile of an individual from the environmental factors and life habits. From the studied methods, Multilayer Perceptron and Support Vector Machines are the most accurate in the prediction. Therefore these tools, together with the visual help that decision trees offer, are the suggested methods to be included in the evaluation of the infertile patient.",
"title": ""
},
{
"docid": "neg:1840204_13",
"text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.",
"title": ""
},
{
"docid": "neg:1840204_14",
"text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We find that the volatility timing strategies outperform the unconditionally efficient static portfolios that have the same target expected return and volatility. This finding is robust to estimation risk and transaction costs.",
"title": ""
},
{
"docid": "neg:1840204_15",
"text": "MIMO is a technology that utilizes multiple antennas at transmitter/receiver to improve the throughput, capacity and coverage of wireless system. Massive MIMO where Base Station is equipped with orders of magnitude more antennas have shown over 10 times spectral efficiency increase over MIMO with simpler signal processing algorithms. Massive MIMO has benefits of enhanced capacity, spectral and energy efficiency and it can be built by using low cost and low power components. Despite its potential benefits, this paper also summarizes some challenges faced by massive MIMO such as antenna spatial correlation and mutual coupling as well as non-linear hardware impairments. These challenges encountered in massive MIMO uncover new problems that need further investigation.",
"title": ""
},
{
"docid": "neg:1840204_16",
"text": "Ergative case, the special case of transitive subjects, raises questions not only for the theory of case but also for theories of subjecthood and transitivity. This paper analyzes the case system of Nez Perce, a “three-way ergative” language, with an eye towards a formalization of the category of transitive subject. I show that it is object agreement that is determinative of transitivity, and hence of ergative case, in Nez Perce. I further show that the transitivity condition on ergative case must be coupled with a criterion of subjecthood that makes reference to participation in subject agreement, not just to origin in a high argument-structural position. These two results suggest a formalization of the transitive subject as that argument uniquely accessing both high and low agreement information, the former through its (agreement-derived) connection with T and the latter through its origin in the specifier of a head associated with object agreement (v). In view of these findings, I argue that ergative case morphology should be analyzed not as the expression of a syntactic primitive but as the morphological spell-out of subject agreement and object agreement on a nominal.",
"title": ""
},
{
"docid": "neg:1840204_17",
"text": "Skin-mountable chemical sensors using flexible chemically sensitive nanomaterials are of great interest for electronic skin (e-skin) application. To build these sensors, the emerging atomically thin two-dimensional (2D) layered semiconductors could be a good material candidate. Herein, we show that a large-area WS2 film synthesized by sulfurization of a tungsten film exhibits high humidity sensing performance both in natural flat and high mechanical flexible states (bending curvature down to 5 mm). The conductivity of as-synthesized WS2 increases sensitively over a wide relative humidity range (up to 90%) with fast response and recovery times in a few seconds. By using graphene as electrodes and thin polydimethylsiloxane (PDMS) as substrate, a transparent, flexible, and stretchable humidity sensor was fabricated. This senor can be well laminated onto skin and shows stable water moisture sensing behaviors in the undeformed relaxed state as well as under compressive and tensile loadings. Furthermore, its high sensing performance enables real-time monitoring of human breath, indicating a potential mask-free breath monitoring for healthcare application. We believe that such a skin-activity compatible WS2 humidity sensor may shed light on developing low power consumption wearable chemical sensors based on 2D semiconductors.",
"title": ""
},
{
"docid": "neg:1840204_18",
"text": "Epilepsy is a neurological disorder with prevalence of about 1-2% of the world’s population (Mormann, Andrzejak, Elger & Lehnertz, 2007). It is characterized by sudden recurrent and transient disturbances of perception or behaviour resulting from excessive synchronization of cortical neuronal networks; it is a neurological condition in which an individual experiences chronic abnormal bursts of electrical discharges in the brain. The hallmark of epilepsy is recurrent seizures termed \"epileptic seizures\". Epileptic seizures are divided by their clinical manifestation into partial or focal, generalized, unilateral and unclassified seizures (James, 1997; Tzallas, Tsipouras & Fotiadis, 2007a, 2009). Focal epileptic seizures involve only part of cerebral hemisphere and produce symptoms in corresponding parts of the body or in some related mental functions. Generalized epileptic seizures involve the entire brain and produce bilateral motor symptoms usually with loss of consciousness. Both types of epileptic seizures can occur at all ages. Generalized epileptic seizures can be subdivided into absence (petit mal) and tonic-clonic (grand mal) seizures (James, 1997).",
"title": ""
},
{
"docid": "neg:1840204_19",
"text": "Radio signal propagation modeling plays an important role in designing wireless communication systems. The propagation models are used to calculate the number and position of base stations and predict the radio coverage. Different models have been developed to predict radio propagation behavior for wireless communication systems in different operating environments. In this paper we shall limit our discussion to the latest achievements in radio propagation modeling related to tunnels. The main modeling approaches used for propagation in tunnels are reviewed, namely, numerical methods for solving Maxwell equations, waveguide or modal approach, ray tracing based methods and two-slope path loss modeling. They are discussed in terms of modeling complexity and required information on the environment including tunnel geometry and electric as well as magnetic properties of walls.",
"title": ""
}
] |
1840205 | Accelerating 5G QoE via public-private spectrum sharing | [
{
"docid": "pos:1840205_0",
"text": "In this paper the possibility of designing an OFDM system for simultaneous radar and communications operations is discussed. A novel approach to OFDM radar processing is introduced that overcomes the typical drawbacks of correlation based processing. A suitable OFDM system parameterization for operation at 24 GHz is derived that fulfills the requirements for both applications. The operability of the proposed system concept is verified with MatLab simulations.",
"title": ""
},
{
"docid": "pos:1840205_1",
"text": "Orthogonal frequency-division multiplexing (OFDM) signal coding and system architecture were implemented to achieve radar and data communication functionalities. The resultant system is a software-defined unit, which can be used for range measurements, radar imaging, and data communications. Range reconstructions were performed for ranges up to 4 m using trihedral corner reflectors with approximately 203 m of radar cross section at the carrier frequency; range resolution of approximately 0.3 m was demonstrated. Synthetic aperture radar (SAR) image of a single corner reflector was obtained; SAR signal processing specific to OFDM signals is presented. Data communication tests were performed in radar setup, where the signal was reflected by the same target and decoded as communication data; bit error rate of was achieved at 57 Mb/s. The system shows good promise as a multifunctional software-defined sensor which can be used in radar sensor networks.",
"title": ""
}
] | [
{
"docid": "neg:1840205_0",
"text": "An extensive literature shows that social relationships influence psychological well-being, but the underlying mechanisms remain unclear. We test predictions about online interactions and well-being made by theories of belongingness, relationship maintenance, relational investment, social support, and social comparison. An opt-in panel study of 1,910 Facebook users linked self-reported measures of well-being to counts of respondents’ Facebook activities from server logs. Specific uses of the site were associated with improvements in well-being: Receiving targeted, composed communication from strong ties was associated with improvements in wellbeing while viewing friends’ wide-audience broadcasts and receiving one-click feedback were not. These results suggest that people derive benefits from online communication, as long it comes from people they care about and has been tailored for them.",
"title": ""
},
{
"docid": "neg:1840205_1",
"text": "BACKGROUND\nConcern over the frequency of unintended harm to patients has focused attention on the importance of teamwork and communication in avoiding errors. This has led to experiments with teamwork training programmes for clinical staff, mostly based on aviation models. These are widely assumed to be effective in improving patient safety, but the extent to which this assumption is justified by evidence remains unclear.\n\n\nMETHODS\nA systematic literature review on the effects of teamwork training for clinical staff was performed. Information was sought on outcomes including staff attitudes, teamwork skills, technical performance, efficiency and clinical outcomes.\n\n\nRESULTS\nOf 1036 relevant abstracts identified, 14 articles were analysed in detail: four randomized trials and ten non-randomized studies. Overall study quality was poor, with particular problems over blinding, subjective measures and Hawthorne effects. Few studies reported on every outcome category. Most reported improved staff attitudes, and six of eight reported significantly better teamwork after training. Five of eight studies reported improved technical performance, improved efficiency or reduced errors. Three studies reported evidence of clinical benefit, but this was modest or of borderline significance in each case. Studies with a stronger intervention were more likely to report benefits than those providing less training. None of the randomized trials found evidence of technical or clinical benefit.\n\n\nCONCLUSION\nThe evidence for technical or clinical benefit from teamwork training in medicine is weak. There is some evidence of benefit from studies with more intensive training programmes, but better quality research and cost-benefit analysis are needed.",
"title": ""
},
{
"docid": "neg:1840205_2",
"text": "We reinterpret multiplicative noise in neural networks as auxiliary random variables that augment the approximate posterior in a variational setting for Bayesian neural networks. We show that through this interpretation it is both efficient and straightforward to improve the approximation by employing normalizing flows (Rezende & Mohamed, 2015) while still allowing for local reparametrizations (Kingma et al., 2015) and a tractable lower bound (Ranganath et al., 2015; Maaløe et al., 2016). In experiments we show that with this new approximation we can significantly improve upon classical mean field for Bayesian neural networks on both predictive accuracy as well as predictive uncertainty.",
"title": ""
},
{
"docid": "neg:1840205_3",
"text": "BACKGROUND\nPancreatic stellate cells (PSCs), a major component of the tumor microenvironment in pancreatic cancer, play roles in cancer progression as well as drug resistance. Culturing various cells in microfluidic (microchannel) devices has proven to be useful in studying cellular interactions and drug sensitivity. Here we present a microchannel plate-based co-culture model that integrates tumor spheroids with PSCs in a three-dimensional (3D) collagen matrix to mimic the tumor microenvironment in vivo by recapitulating epithelial-mesenchymal transition and chemoresistance.\n\n\nMETHODS\nA 7-channel microchannel plate was prepared using poly-dimethylsiloxane (PDMS) via soft lithography. PANC-1, a human pancreatic cancer cell line, and PSCs, each within a designated channel of the microchannel plate, were cultured embedded in type I collagen. Expression of EMT-related markers and factors was analyzed using immunofluorescent staining or Proteome analysis. Changes in viability following exposure to gemcitabine and paclitaxel were measured using Live/Dead assay.\n\n\nRESULTS\nPANC-1 cells formed 3D tumor spheroids within 5 days and the number of spheroids increased when co-cultured with PSCs. Culture conditions were optimized for PANC-1 cells and PSCs, and their appropriate interaction was confirmed by reciprocal activation shown as increased cell motility. PSCs under co-culture showed an increased expression of α-SMA. Expression of EMT-related markers, such as vimentin and TGF-β, was higher in co-cultured PANC-1 spheroids compared to that in mono-cultured spheroids; as was the expression of many other EMT-related factors including TIMP1 and IL-8. Following gemcitabine exposure, no significant changes in survival were observed. When paclitaxel was combined with gemcitabine, a growth inhibitory advantage was prominent in tumor spheroids, which was accompanied by significant cytotoxicity in PSCs.\n\n\nCONCLUSIONS\nWe demonstrated that cancer cells grown as tumor spheroids in a 3D collagen matrix and PSCs co-cultured in sub-millimeter proximity participate in mutual interactions that induce EMT and drug resistance in a microchannel plate. Microfluidic co-culture of pancreatic tumor spheroids with PSCs may serve as a useful model for studying EMT and drug resistance in a clinically relevant manner.",
"title": ""
},
{
"docid": "neg:1840205_4",
"text": "Recently, CNN reported on the future of brain-computer interfaces (BCIs). BCIs are devices that process a user's brain signals to allow direct communication and interaction with the environment. BCIs bypass the normal neuromuscular output pathways and rely on digital signal processing and machine learning to translate brain signals to action (Figure 1). Historically, BCIs were developed with biomedical applications in mind, such as restoring communication in completely paralyzed individuals and replacing lost motor function. More recent applications have targeted nondisabled individuals by exploring the use of BCIs as a novel input device for entertainment and gaming. The task of the BCI is to identify and predict behaviorally induced changes or \"cognitive states\" in a user's brain signals. Brain signals are recorded either noninvasively from electrodes placed on the scalp [electroencephalogram (EEG)] or invasively from electrodes placed on the surface of or inside the brain. BCIs based on these recording techniques have allowed healthy and disabled individuals to control a variety of devices. In this article, we will describe different challenges and proposed solutions for noninvasive brain-computer interfacing.",
"title": ""
},
{
"docid": "neg:1840205_5",
"text": "“Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the implied communication channel consists of both the experimental and biological system. Thus, the terms “neural code” are used inappropriately when “neuroexperimental code” would be more accurate, although less insightful. Second, the brain cannot be presumed to decode neural messages into objective properties of the world, since it never gets to observe those properties. To avoid dualism, codes must relate not to external properties but to internal sensorimotor models. Because this requires structured representations, neural assemblies cannot be the basis of such codes. Third, a message is informative to the extent that the reader understands its language. But the neural code is private to the encoder since only the message is communicated: each neuron speaks its own language. It follows that in the neural coding metaphor, the brain is a Tower of Babel. Finally, the relation between input signals and actions is circular; that inputs do not preexist to outputs makes the coding paradigm problematic. I conclude that the view that spikes are messages is generally not tenable. An alternative proposition is that action potentials are actions on other neurons and the environment, and neurons interact with each other rather than exchange messages.",
"title": ""
},
{
"docid": "neg:1840205_6",
"text": "We present a novel adaptation technique for search engines to better support information-seeking activities that include both lookup and exploratory tasks. Building on previous findings, we describe (1) a classifier that recognizes task type (lookup vs. exploratory) as a user is searching and (2) a reinforcement learning based search engine that adapts accordingly the balance of exploration/exploitation in ranking the documents. This allows supporting both task types surreptitiously without changing the familiar list-based interface. Search results include more diverse results when users are exploring and more precise results for lookup tasks. Users found more useful results in exploratory tasks when compared to a base-line system, which is specifically tuned for lookup tasks.",
"title": ""
},
{
"docid": "neg:1840205_7",
"text": "With the continuous development of online learning platforms, educational data analytics and prediction have become a promising research field, which are helpful for the development of personalized learning system. However, the indicator's selection process does not combine with the whole learning process, which may affect the accuracy of prediction results. In this paper, we induce 19 behavior indicators in the online learning platform, proposing a student performance prediction model which combines with the whole learning process. The model consists of four parts: data collection and pre-processing, learning behavior analytics, algorithm model building and prediction. Moreover, we apply an optimized Logistic Regression algorithm, taking a case to analyze students' behavior and to predict their performance. Experimental results demonstrate that these eigenvalues can effectively predict whether a student was probably to have an excellent grade.",
"title": ""
},
{
"docid": "neg:1840205_8",
"text": "Optical flow computation is a key component in many computer vision systems designed for tasks such as action detection or activity recognition. However, despite several major advances over the last decade, handling large displacement in optical flow remains an open problem. Inspired by the large displacement optical flow of Brox and Malik, our approach, termed Deep Flow, blends a matching algorithm with a variational approach for optical flow. We propose a descriptor matching algorithm, tailored to the optical flow problem, that allows to boost performance on fast motions. The matching algorithm builds upon a multi-stage architecture with 6 layers, interleaving convolutions and max-pooling, a construction akin to deep convolutional nets. Using dense sampling, it allows to efficiently retrieve quasi-dense correspondences, and enjoys a built-in smoothing effect on descriptors matches, a valuable asset for integration into an energy minimization framework for optical flow estimation. Deep Flow efficiently handles large displacements occurring in realistic videos, and shows competitive performance on optical flow benchmarks. Furthermore, it sets a new state-of-the-art on the MPI-Sintel dataset.",
"title": ""
},
{
"docid": "neg:1840205_9",
"text": "Sparse reward is one of the most challenging problems in reinforcement learning (RL). Hindsight Experience Replay (HER) attempts to address this issue by converting a failure experience to a successful one by relabeling the goals. Despite its effectiveness, HER has limited applicability because it lacks a compact and universal goal representation. We present Augmenting experienCe via TeacheR’s adviCE (ACTRCE), an efficient reinforcement learning technique that extends the HER framework using natural language as the goal representation. We first analyze the differences among goal representation, and show that ACTRCE can efficiently solve difficult reinforcement learning problems in challenging 3D navigation tasks, whereas HER with non-language goal representation failed to learn. We also show that with language goal representations, the agent can generalize to unseen instructions, and even generalize to instructions with unseen lexicons. We further demonstrate it is crucial to use hindsight advice to solve challenging tasks, but we also found that little amount of hindsight advice is sufficient for the learning to take off, showing the practical aspect of the method.",
"title": ""
},
{
"docid": "neg:1840205_10",
"text": "Facial expression recognizers based on handcrafted features have achieved satisfactory performance on many databases. Recently, deep neural networks, e. g. deep convolutional neural networks (CNNs) have been shown to boost performance on vision tasks. However, the mechanisms exploited by CNNs are not well established. In this paper, we establish the existence and utility of feature maps selective to action units in a deep CNN trained by transfer learning. We transfer a network pre-trained on the Image-Net dataset to the facial expression recognition task using the Karolinska Directed Emotional Faces (KDEF), Radboud Faces Database(RaFD) and extended Cohn-Kanade (CK+) database. We demonstrate that higher convolutional layers of the deep CNN trained on generic images are selective to facial action units. We also show that feature selection is critical in achieving robustness, with action unit selective feature maps being more critical in the facial expression recognition task. These results support the hypothesis that both human and deeply learned CNNs use similar mechanisms for recognizing facial expressions.",
"title": ""
},
{
"docid": "neg:1840205_11",
"text": "In an effort to explain pro-environmental behavior, environmental sociologists often study environmental attitudes. While much of this work is atheoretical, the focus on attitudes suggests that researchers are implicitly drawing upon attitude theory in psychology. The present research brings sociological theory to environmental sociology by drawing on identity theory to understand environmentally responsive behavior. We develop an environment identity model of environmental behavior that includes not only the meanings of the environment identity, but also the prominence and salience of the environment identity and commitment to the environment identity. We examine the identity process as it relates to behavior, though not to the exclusion of examining the effects of environmental attitudes. The findings reveal that individual agency is important in influencing environmentally responsive behavior, but this agency is largely through identity processes, rather than attitude processes. This provides an important theoretical and empirical advance over earlier work in environmental sociology.",
"title": ""
},
{
"docid": "neg:1840205_12",
"text": "In this paper, we propose a new descriptor for texture classification that is robust to image blurring. The descriptor utilizes phase information computed locally in a window for every image position. The phases of the four low-frequency coefficients are decorrelated and uniformly quantized in an eight-dimensional space. A histogram of the resulting code words is created and used as a feature in texture classification. Ideally, the low-frequency phase components are shown to be invariant to centrally symmetric blur. Although this ideal invariance is not completely achieved due to the finite window size, the method is still highly insensitive to blur. Because only phase information is used, the method is also invariant to uniform illumination changes. According to our experiments, the classification accuracy of blurred texture images is much higher with the new method than with the well-known LBP or Gabor filter bank methods. Interestingly, it is also slightly better for textures that are not blurred.",
"title": ""
},
{
"docid": "neg:1840205_13",
"text": "Wearable technology comprises miniaturized sensors (eg, accelerometers) worn on the body and/or paired with mobile devices (eg, smart phones) allowing continuous patient monitoring in unsupervised, habitual environments (termed free-living). Wearable technologies are revolutionizing approaches to health care as a result of their utility, accessibility, and affordability. They are positioned to transform Parkinson's disease (PD) management through the provision of individualized, comprehensive, and representative data. This is particularly relevant in PD where symptoms are often triggered by task and free-living environmental challenges that cannot be replicated with sufficient veracity elsewhere. This review concerns use of wearable technology in free-living environments for people with PD. It outlines the potential advantages of wearable technologies and evidence for these to accurately detect and measure clinically relevant features including motor symptoms, falls risk, freezing of gait, gait, functional mobility, and physical activity. Technological limitations and challenges are highlighted, and advances concerning broader aspects are discussed. Recommendations to overcome key challenges are made. To date there is no fully validated system to monitor clinical features or activities in free-living environments. Robust accuracy and validity metrics for some features have been reported, and wearable technology may be used in these cases with a degree of confidence. Utility and acceptability appears reasonable, although testing has largely been informal. Key recommendations include adopting a multidisciplinary approach for standardizing definitions, protocols, and outcomes. Robust validation of developed algorithms and sensor-based metrics is required along with testing of utility. These advances are required before widespread clinical adoption of wearable technology can be realized. © 2016 International Parkinson and Movement Disorder Society.",
"title": ""
},
{
"docid": "neg:1840205_14",
"text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.",
"title": ""
},
{
"docid": "neg:1840205_15",
"text": "Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises of regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the observed data is generated by a system or process which is driven by a small number of latent (not directly observed or measured) variables. Projections of the observed data to its latent structure by means of PLS was developed by Herman Wold and coworkers [48, 49, 52]. PLS has received a great amount of attention in the field of chemometrics. The algorithm has become a standard tool for processing a wide spectrum of chemical data problems. The success of PLS in chemometrics resulted in a lot of applications in other scientific areas including bioinformatics, food research, medicine, pharmacology, social sciences, physiology–to name but a few [28, 25, 53, 29, 18, 22]. This chapter introduces the main concepts of PLS and provides an overview of its application to different data analysis problems. Our aim is to present a concise introduction, that is, a valuable guide for anyone who is concerned with data analysis. In its general form PLS creates orthogonal score vectors (also called latent vectors or components) by maximising the covariance between different sets of variables. PLS dealing with two blocks of variables is considered in this chapter, although the PLS extensions to model relations among a higher number of sets exist [44, 46, 47, 48, 39]. PLS is similar to Canonical Correlation Analysis (CCA) where latent vectors with maximal correlation are extracted [24]. There are different PLS techniques to extract latent vectors, and each of them gives rise to a variant of PLS. PLS can be naturally extended to regression problems. The predictor and predicted (response) variables are each considered as a block of variables. 
PLS then extracts the score vectors which serve as a new predictor representation",
"title": ""
},
{
"docid": "neg:1840205_16",
"text": "Ciphertext Policy Attribute-Based Encryption (CP-ABE) enforces expressive data access policies and each policy consists of a number of attributes. Most existing CP-ABE schemes incur a very large ciphertext size, which increases linearly with respect to the number of attributes in the access policy. Recently, Herranz proposed a construction of CP-ABE with constant ciphertext. However, Herranz do not consider the recipients' anonymity and the access policies are exposed to potential malicious attackers. On the other hand, existing privacy preserving schemes protect the anonymity but require bulky, linearly increasing ciphertext size. In this paper, we proposed a new construction of CP-ABE, named Privacy Preserving Constant CP-ABE (denoted as PP-CP-ABE) that significantly reduces the ciphertext to a constant size with any given number of attributes. Furthermore, PP-CP-ABE leverages a hidden policy construction such that the recipients' privacy is preserved efficiently. As far as we know, PP-CP-ABE is the first construction with such properties. Furthermore, we developed a Privacy Preserving Attribute-Based Broadcast Encryption (PP-AB-BE) scheme. Compared to existing Broadcast Encryption (BE) schemes, PP-AB-BE is more flexible because a broadcasted message can be encrypted by an expressive hidden access policy, either with or without explicit specifying the receivers. Moreover, PP-AB-BE significantly reduces the storage and communication overhead to the order of O(log N), where N is the system size. Also, we proved, using information theoretical approaches, PP-AB-BE attains minimal bound on storage overhead for each user to cover all possible subgroups in the communication system.",
"title": ""
},
{
"docid": "neg:1840205_17",
"text": "is granted to distribute this article for nonprofit, educational purposes if it is copied in its entirety and the journal is credited. PARE has the right to authorize third party reproduction of this article in print, electronic and database forms. Researchers occasionally have to work with an extremely small sample size, defined herein as N ≤ 5. Some methodologists have cautioned against using the t-test when the sample size is extremely small, whereas others have suggested that using the t-test is feasible in such a case. The present simulation study estimated the Type I error rate and statistical power of the one-and two-sample t-tests for normally distributed populations and for various distortions such as unequal sample sizes, unequal variances, the combination of unequal sample sizes and unequal variances, and a lognormal population distribution. Ns per group were varied between 2 and 5. Results show that the t-test provides Type I error rates close to the 5% nominal value in most of the cases, and that acceptable power (i.e., 80%) is reached only if the effect size is very large. This study also investigated the behavior of the Welch test and a rank-transformation prior to conducting the t-test (t-testR). Compared to the regular t-test, the Welch test tends to reduce statistical power and the t-testR yields false positive rates that deviate from 5%. This study further shows that a paired t-test is feasible with extremely small Ns if the within-pair correlation is high. It is concluded that there are no principal objections to using a t-test with Ns as small as 2. A final cautionary note is made on the credibility of research findings when sample sizes are small. The dictum \" more is better \" certainly applies to statistical inference. According to the law of large numbers, a larger sample size implies that confidence intervals are narrower and that more reliable conclusions can be reached. 
The reality is that researchers are usually far from the ideal \" mega-trial \" performed with 10,000 subjects (cf. Ioannidis, 2013) and will have to work with much smaller samples instead. For a variety of reasons, such as budget, time, or ethical constraints, it may not be possible to gather a large sample. In some fields of science, such as research on rare animal species, persons having a rare illness, or prodigies scoring at the extreme of an ability distribution (e.g., Ruthsatz & Urbach, 2012), …",
"title": ""
},
{
"docid": "neg:1840205_18",
"text": "Multiple, often conflicting objectives arise naturally in most real-world optimization scenarios. As evolutionary algorithms possess several characteristics that are desirable for this type of problem, this class of search strategies has been used for multiobjective optimization for more than a decade. Meanwhile evolutionary multiobjective optimization has become established as a separate subdiscipline combining the fields of evolutionary computation and classical multiple criteria decision making. This paper gives an overview of evolutionary multiobjective optimization with the focus on methods and theory. On the one hand, basic principles of multiobjective optimization and evolutionary algorithms are presented, and various algorithmic concepts such as fitness assignment, diversity preservation, and elitism are discussed. On the other hand, the tutorial includes some recent theoretical results on the performance of multiobjective evolutionary algorithms and addresses the question of how to simplify the exchange of methods and applications by means of a standardized interface.",
"title": ""
}
] |
1840206 | Trajectory similarity measures | [
{
"docid": "pos:1840206_0",
"text": "We investigate techniques for analysis and retrieval of object trajectories in a two or three dimensional space. Such kind of data usually contain a great amount of noise, that makes all previously used metrics fail. Therefore, here we formalize non-metric similarity functions based on the Longest Common Subsequence (LCSS), which are very robust to noise and furthermore provide an intuitive notion of similarity between trajectories by giving more weight to the similar portions of the sequences. Stretching of sequences in time is allowed, as well as global translating of the sequences in space. Efficient approximate algorithms that compute these similarity measures are also provided. We compare these new methods to the widely used Euclidean and Time Warping distance functions (for real and synthetic data) and show the superiority of our approach, especially under the strong presence of noise. We prove a weaker version of the triangle inequality and employ it in an indexing structure to answer nearest neighbor queries. Finally, we present experimental results that validate the accuracy and efficiency of our approach.",
"title": ""
}
] | [
{
"docid": "neg:1840206_0",
"text": "Some natural language processing tasks can be learned from example corpora, but having enough examples for the task at hands can be a bottleneck. In this work we address how Wikipedia and DBpedia, two freely available language resources, can be used to support Named Entity Recognition, a fundamental task in Information Extraction and a necessary step of other tasks such as Co-reference Resolution and Relation Extraction.",
"title": ""
},
{
"docid": "neg:1840206_1",
"text": "Modern data analysis stands at the interface of statistics, computer science, and discrete mathematics. This volume describes new methods in this area, with special emphasis on classification and cluster analysis. Those methods are applied to problems in information retrieval, phylogeny, medical dia... This is the first book primarily dedicated to clustering using multiobjective genetic algorithms with extensive real-life applications in data mining and bioinformatics. The authors first offer detailed introductions to the relevant techniques-genetic algorithms, multiobjective optimization, soft ...",
"title": ""
},
{
"docid": "neg:1840206_2",
"text": "It is found that the current manual water quality monitoring entails tedious process and is time consuming. To alleviate the problems caused by the manual monitoring and the lack of effective system for prawn farming, a remote water quality monitoring for prawn farming pond is proposed. The proposed system is leveraging on wireless sensors in detecting the water quality and Short Message Service (SMS) technology in delivering alert to the farmers upon detection of degradation of the water quality. Three water quality parameters that are critical to the prawn health are monitored, which are pH, temperature and dissolved oxygen. In this paper, the details of system design and implementation are presented. The results obtained in the preliminary survey study served as the basis for the development of the system prototype. Meanwhile, the results acquired through the usability testing imply that the system is able to meet the users’ needs. Key-Words: Remote monitoring, Water Quality, Wireless sensors",
"title": ""
},
{
"docid": "neg:1840206_3",
"text": "Thin film bulk acoustic wave resonators (FBAR) using piezoelectric AlN thin films have attracted extensive research activities in the past few years. Highly c-axis oriented AlN thin films are particularly investigated for resonators operating at the fundamental thickness longitudinal mode. Depending on the processing conditions, tilted polarization (c-axis off the normal direction to the substrate surface) is often found for the as-deposited AlN thin films, which may leads to the coexistence of thickness longitudinal mode and shear mode for the thin film resonators. Knowing that the material properties are strongly crystalline orientation dependent for AlN thin films, a theoretical study is conducted to reveal the effect of tilted polarization on the frequency characteristics of thin film resonators. The input electric impedance of a thin film resonator is derived that includes both thickness longitudinal and thickness shear modes in a uniform equation. Based on the theoretical model, the effective material properties corresponding to the longitudinal and shear modes are calculated through the properties transformation between the original and new coordinate systems. The electric impedance spectra of dual mode AlN thin film resonators are calculated using appropriate materials properties and compared with experimental results. The results indicate that the frequency characteristics of thin film resonators vary with the tilted polarization angles. The coexistence of thickness longitudinal and shear modes in the thin film resonators may provide some flexibility in the design and fabrication of the FBAR devices.",
"title": ""
},
{
"docid": "neg:1840206_4",
"text": "Unstructured data, such as news and blogs, can provide valuable insights into the financial world. We present the NewsStream portal, an intuitive and easy-to-use tool for news analytics, which supports interactive querying and visualizations of the documents at different levels of detail. It relies on a scalable architecture for real-time processing of a continuous stream of textual data, which incorporates data acquisition, cleaning, natural-language preprocessing and semantic annotation components. It has been running for over two years and collected over 18 million news articles and blog posts. The NewsStream portal can be used to answer the questions when, how often, in what context, and with what sentiment was a financial entity or term mentioned in a continuous stream of news and blogs, and therefore providing a complement to news aggregators. We illustrate some features of our system in four use cases: relations between the rating agencies and the PIIGS countries, reflection of financial news on credit default swap (CDS) prices, the emergence of the Bitcoin digital currency, and visualizing how the world is connected through news.",
"title": ""
},
{
"docid": "neg:1840206_5",
"text": "A new method, modified Bagging (mBagging) of Maximal Information Coefficient (mBoMIC), was developed for genome-wide identification. Traditional Bagging is inadequate to meet some requirements of genome-wide identification, in terms of statistical performance and time cost. To improve statistical performance and reduce time cost, an mBagging was developed to introduce Maximal Information Coefficient (MIC) into genomewide identification. The mBoMIC overcame the weakness of original MIC, i.e., the statistical power is inadequate and MIC values are volatile. The three incompatible measures of Bagging, i.e. time cost, statistical power and false positive rate, were significantly improved simultaneously. Compared with traditional Bagging, mBagging reduced time cost by 80%, improved statistical power by 15%, and decreased false positive rate by 31%. The mBoMIC has sensitivity and university in genome-wide identification. The SNPs identified only by mBoMIC have been reported as SNPs associated with cardiac disease.",
"title": ""
},
{
"docid": "neg:1840206_6",
"text": "Reconstruction of massive abdominal wall defects has long been a vexing clinical problem. A landmark development for the autogenous tissue reconstruction of these difficult wounds was the introduction of \"components of anatomic separation\" technique by Ramirez et al. This method uses bilateral, innervated, bipedicle, rectus abdominis-transversus abdominis-internal oblique muscle flap complexes transposed medially to reconstruct the central abdominal wall. Enamored with this concept, this institution sought to define the limitations and complications and to quantify functional outcome with the use of this technique. During a 4-year period (July of 1991 to 1995), 22 patients underwent reconstruction of massive midline abdominal wounds. The defects varied in size from 6 to 14 cm in width and from 10 to 24 cm in height. Causes included removal of infected synthetic mesh material (n = 7), recurrent hernia (n = 4), removal of split-thickness skin graft and dense abdominal wall cicatrix (n = 4), parastomal hernia (n = 2), primary incisional hernia (n = 2), trauma/enteric sepsis (n = 2), and tumor resection (abdominal wall desmoid tumor involving the right rectus abdominis muscle) (n = 1). Twenty patients were treated with mobilization of both rectus abdominis muscles, and in two patients one muscle complex was used. The plane of \"separation\" was the interface between the external and internal oblique muscles. A quantitative dynamic assessment of the abdominal wall was performed in two patients by using a Cybex TEF machine, with analysis of truncal flexion strength being undertaken preoperatively and at 6 months after surgery. Patients achieved wound healing in all cases with one operation. Minor complications included superficial infection in two patients and a wound seroma in one. One patient developed a recurrent incisional hernia 8 months postoperatively. There was one postoperative death caused by multisystem organ failure. 
One patient required the addition of synthetic mesh to achieve abdominal closure. This case involved a thin patient whose defect exceeded 16 cm in width. There has been no clinically apparent muscle weakness in the abdomen over that present preoperatively. Analysis of preoperative and postoperative truncal force generation revealed a 40 percent increase in strength in the two patients tested on a Cybex machine. Reoperation was possible through the reconstructed abdominal wall in two patients without untoward sequela. This operation is an effective method for autogenous reconstruction of massive midline abdominal wall defects. It can be used either as a primary mode of defect closure or to treat the complications of trauma, surgery, or various diseases.",
"title": ""
},
{
"docid": "neg:1840206_7",
"text": "Species of lactic acid bacteria (LAB) represent as potential microorganisms and have been widely applied in food fermentation worldwide. Milk fermentation process has been relied on the activity of LAB, where transformation of milk to good quality of fermented milk products made possible. The presence of LAB in milk fermentation can be either as spontaneous or inoculated starter cultures. Both of them are promising cultures to be explored in fermented milk manufacture. LAB have a role in milk fermentation to produce acid which is important as preservative agents and generating flavour of the products. They also produce exopolysaccharides which are essential as texture formation. Considering the existing reports on several health-promoting properties as well as their generally recognized as safe (GRAS) status of LAB, they can be widely used in the developing of new fermented milk products.",
"title": ""
},
{
"docid": "neg:1840206_8",
"text": "Due to the low-dimensional property of clean hyperspectral images (HSIs), many low-rank-based methods have been proposed to denoise HSIs. However, in an HSI, the noise intensity in different bands is often different, and most of the existing methods do not take this fact into consideration. In this paper, a noise-adjusted iterative low-rank matrix approximation (NAILRMA) method is proposed for HSI denoising. Based on the low-rank property of HSIs, the patchwise low-rank matrix approximation (LRMA) is established. To further separate the noise from the signal subspaces, an iterative regularization framework is proposed. Considering that the noise intensity in different bands is different, an adaptive iteration factor selection based on the noise variance of each HSI band is adopted. This noise-adjusted iteration strategy can effectively preserve the high-SNR bands and denoise the low-SNR bands. The randomized singular value decomposition (RSVD) method is then utilized to solve the NAILRMA optimization problem. A number of experiments were conducted in both simulated and real data conditions to illustrate the performance of the proposed NAILRMA method for HSI denoising.",
"title": ""
},
{
"docid": "neg:1840206_9",
"text": "Data mining is the process of identifying the hidden patterns from large and complex data. It may provide crucial role in decision making for complex agricultural problems. Data visualisation is also equally important to understand the general trends of the effect of various factors influencing the crop yield. The present study examines the application of data visualisation techniques to find correlations between the climatic factors and rice crop yield. The study also applies data mining techniques to extract the knowledge from the historical agriculture data set to predict rice crop yield for Kharif season of Tropical Wet and Dry climatic zone of India. The data set has been visualised in Microsoft Office Excel using scatter plots. The classification algorithms have been executed in the free and open source data mining tool WEKA. The experimental results provided include sensitivity, specificity, accuracy, F1 score, Mathews correlation coefficient, mean absolute error, root mean squared error, relative absolute error and root relative squared error. General trends in the data visualisation show that decrease in precipitation in the selected climatic zone increases the rice crop yield and increase in minimum, average or maximum temperature for the season increases the rice crop yield. For the current data set experimental results show that J48 and LADTree achieved the highest accuracy, sensitivity and specificity. Classification performed by LWL classifier displayed the lowest accuracy, sensitivity and specificity results.",
"title": ""
},
{
"docid": "neg:1840206_10",
"text": "The pantograph-overhead contact wire system is investigated by using an infrared camera. As the pantograph has a vertical motion because of the non-uniform elasticity of the catenary, in order to detect the temperature along the strip from a sequence of infrared images, a segment-tracking algorithm, based on the Hough transformation, has been employed. An analysis of the stored images could help maintenance operations revealing, for example, overheating of the pantograph strip, bursts of arcing, or an irregular positioning of the contact line. Obtained results are relevant for monitoring the status of the quality transmission of the current and for a predictive maintenance of the pantograph and of the catenary system. Examples of analysis from experimental data are reported in the paper.",
"title": ""
},
{
"docid": "neg:1840206_11",
"text": "We review recent breakthroughs in the silicon photonic technology and components, and describe progress in silicon photonic integrated circuits. Heterogeneous silicon photonics has recently demonstrated performance that significantly outperforms native III/V components. The impact active silicon photonic integrated circuits could have on interconnects, telecommunications, sensors, and silicon electronics is reviewed.",
"title": ""
},
{
"docid": "neg:1840206_12",
"text": "Traditional media outlets are known to report political news in a biased way, potentially affecting the political beliefs of the audience and even altering their voting behaviors. Therefore, tracking bias in everyday news and building a platform where people can receive balanced news information is important. We propose a model that maps the news media sources along a dimensional dichotomous political spectrum using the co-subscriptions relationships inferred by Twitter links. By analyzing 7 million follow links, we show that the political dichotomy naturally arises on Twitter when we only consider direct media subscription. Furthermore, we demonstrate a real-time Twitter-based application that visualizes an ideological map of various media sources.",
"title": ""
},
{
"docid": "neg:1840206_13",
"text": "In this work we investigate the problem of road scene semanti c segmentation using Deconvolutional Networks (DNs). Several c onstraints limit the practical performance of DNs in this context: firstly, the pa ucity of existing pixelwise labelled training data, and secondly, the memory const rai ts of embedded hardware, which rule out the practical use of state-of-theart DN architectures such as fully convolutional networks (FCN). To address the fi rst constraint, we introduce a Multi-Domain Road Scene Semantic Segmentation (M DRS3) dataset, aggregating data from six existing densely and sparsely lab elled datasets for training our models, and two existing, separate datasets for test ing their generalisation performance. We show that, while MDRS3 offers a greater volu me and variety of data, end-to-end training of a memory efficient DN does not yield satisfactory performance. We propose a new training strategy to over c me this, based on (i) the creation of a best-possible source network (S-Net ) from the aggregated data, ignoring time and memory constraints; and (ii) the tra nsfer of knowledge from S-Net to the memory-efficient target network (T-Net). W e evaluate different techniques for S-Net creation and T-Net transferral, and de monstrate that training a constrained deconvolutional network in this manner can un lock better performance than existing training approaches. Specifically, we s how that a target network can be trained to achieve improved accuracy versus an FC N despite using less than 1% of the memory. We believe that our approach can be useful beyond automotive scenarios where labelled data is similarly scar ce o fragmented and where practical constraints exist on the desired model size . We make available our network models and aggregated multi-domain dataset for reproducibility.",
"title": ""
},
{
"docid": "neg:1840206_14",
"text": "In this paper, an action planning algorithm is presented for a reconfigurable hybrid leg–wheel mobile robot. Hybrid leg–wheel robots have recently receiving growing interest from the space community to explore planets, as they offer a solution to improve speed and mobility on uneven terrain. One critical issue connected with them is the study of an appropriate strategy to define when to use one over the other locomotion mode, depending on the soil properties and topology. Although this step is crucial to reach the full hybrid mechanism’s potential, little attention has been devoted to this topic. Given an elevation map of the environment, we developed an action planner that selects the appropriate locomotion mode along an optimal path toward a point of scientific interest. This tool is helpful for the space mission team to decide the next move of the robot during the exploration. First, a candidate path is generated based on topology and specifications’ criteria functions. Then, switching actions are defined along this path based on the robot’s performance in each motion mode. Finally, the path is rated based on the energy profile evaluated using a dynamic simulator. The proposed approach is applied to a concept prototype of a reconfigurable hybrid wheel–leg robot for planetary exploration through extensive simulations and real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2010",
"title": ""
},
{
"docid": "neg:1840206_15",
"text": "It has been estimated that in urban scenarios up to 30% of the traffic is due to vehicles looking for a free parking space. Thanks to recent technological evolutions, it is now possible to have at least a partial coverage of real-time data of parking space availability, and some preliminary mobile services are able to guide drivers towards free parking spaces. Nevertheless, the integration of this data within car navigators is challenging, mainly because (I) current In-Vehicle Telematic systems are not connected, and (II) they have strong limitations in terms of storage capabilities. To overcome these issues, in this paper we present a back-end based approach to learn historical models of parking availability per street. These compact models can then be easily stored on the map in the vehicle. In particular, we investigate the trade-off between the granularity level of the detailed spatial and temporal representation of parking space availability vs. The achievable prediction accuracy, using different spatio-temporal clustering strategies. The proposed solution is evaluated using five months of parking availability data, publicly available from the project Spark, based in San Francisco. Results show that clustering can reduce the needed storage up to 99%, still having an accuracy of around 70% in the predictions.",
"title": ""
},
{
"docid": "neg:1840206_16",
"text": "In this paper, we present a complete system for automatic face replacement in images. Our system uses a large library of face images created automatically by downloading images from the internet, extracting faces using face detection software, and aligning each extracted face to a common coordinate system. This library is constructed off-line, once, and can be efficiently accessed during face replacement. Our replacement algorithm has three main stages. First, given an input image, we detect all faces that are present, align them to the coordinate system used by our face library, and select candidate face images from our face library that are similar to the input face in appearance and pose. Second, we adjust the pose, lighting, and color of the candidate face images to match the appearance of those in the input image, and seamlessly blend in the results. Third, we rank the blended candidate replacements by computing a match distance over the overlap region. Our approach requires no 3D model, is fully automatic, and generates highly plausible results across a wide range of skin tones, lighting conditions, and viewpoints. We show how our approach can be used for a variety of applications including face de-identification and the creation of appealing group photographs from a set of images. We conclude with a user study that validates the high quality of our replacement results, and a discussion on the current limitations of our system.",
"title": ""
},
{
"docid": "neg:1840206_17",
"text": "Psychedelic drug flashbacks have been a puzzling clinical phenomenon observed by clinicians. Flashbacks are defined as transient, spontaneous recurrences of the psychedelic drug effect appearing after a period of normalcy following an intoxication of psychedelics. The paper traces the evolution of the concept of flashback and gives examples of the varieties encountered. Although many drugs have been advocated for the treatment of flashback, flashbacks generally decrease in intensity and frequency with abstinence from psychedelic drugs.",
"title": ""
},
{
"docid": "neg:1840206_18",
"text": "Partial shading of a photovoltaic array is the condition under which different modules in the array experience different irradiance levels due to shading. This difference causes mismatch between the modules, leading to undesirable effects such as reduction in generated power and hot spots. The severity of these effects can be considerably reduced by photovoltaic array reconfiguration. This paper proposes a novel mathematical formulation for the optimal reconfiguration of photovoltaic arrays to minimize partial shading losses. The paper formulates the reconfiguration problem as a mixed integer quadratic programming problem and finds the optimal solution using a branch and bound algorithm. The proposed formulation can be used for an equal or nonequal number of modules per row. Moreover, it can be used for fully reconfigurable or partially reconfigurable arrays. The improvement resulting from the reconfiguration with respect to the existing photovoltaic interconnections is demonstrated by extensive simulation results.",
"title": ""
}
] |
1840207 | Variations in cognitive maps: understanding individual differences in navigation. | [
{
"docid": "pos:1840207_0",
"text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.",
"title": ""
}
] | [
{
"docid": "neg:1840207_0",
"text": "Following your need to always fulfil the inspiration to obtain everybody is now simple. Connecting to the internet is one of the short cuts to do. There are so many sources that offer and connect us to other world condition. As one of the products to see in internet, this website becomes a very available place to look for countless ggplot2 elegant graphics for data analysis sources. Yeah, sources about the books from countries in the world are provided.",
"title": ""
},
{
"docid": "neg:1840207_1",
"text": "Nuclear transfer of an oocyte into the cytoplasm of another enucleated oocyte has shown that embryogenesis and implantation are influenced by cytoplasmic factors. We report a case of a 30-year-old nulligravida woman who had two failed IVF cycles characterized by all her embryos arresting at the two-cell stage and ultimately had pronuclear transfer using donor oocytes. After her third IVF cycle, eight out of 12 patient oocytes and 12 out of 15 donor oocytes were fertilized. The patient's pronuclei were transferred subzonally into an enucleated donor cytoplasm resulting in seven reconstructed zygotes. Five viable reconstructed embryos were transferred into the patient's uterus resulting in a triplet pregnancy with fetal heartbeats, normal karyotypes and nuclear genetic fingerprinting matching the mother's genetic fingerprinting. Fetal mitochondrial DNA profiles were identical to those from donor cytoplasm with no detection of patient's mitochondrial DNA. This report suggests that a potentially viable pregnancy with normal karyotype can be achieved through pronuclear transfer. Ongoing work to establish the efficacy and safety of pronuclear transfer will result in its use as an aid for human reproduction.",
"title": ""
},
{
"docid": "neg:1840207_2",
"text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.",
"title": ""
},
{
"docid": "neg:1840207_3",
"text": "An intelligent observer looks at the world and sees not only what is, but what is moving and what can be moved. In other words, the observer sees how the present state of the world can transform in the future. We propose a model that predicts future images by learning to represent the present state and its transformation given only a sequence of images. To do so, we introduce an architecture with a latent state composed of two components designed to capture (i) the present image state and (ii) the transformation between present and future states, respectively. We couple this latent state with a recurrent neural network (RNN) core that predicts future frames by transforming past states into future states by applying the accumulated state transformation with a learned operator. We describe how this model can be integrated into an encoder-decoder convolutional neural network (CNN) architecture that uses weighted residual connections to integrate representations of the past with representations of the future. Qualitatively, our approach generates image sequences that are stable and capture realistic motion over multiple predicted frames, without requiring adversarial training. Quantitatively, our method achieves prediction results comparable to state-of-the-art results on standard image prediction benchmarks (Moving MNIST, KTH, and UCF101).",
"title": ""
},
{
"docid": "neg:1840207_4",
"text": "Nowadays, Telemarketing is an interactive technique of direct marketing that many banks apply to present a long term deposit to bank customers via the phone. Although the offering like this manner is powerful, it may make the customers annoyed. The data prediction is a popular task in data mining because it can be applied to solve this problem. However, the predictive performance may be decreased in case of the input data have many features like the bank customer information. In this paper, we focus on how to reduce the feature of input data and balance the training set for the predictive model to help the bank to increase the prediction rate. In the system performance evaluation, all accuracy rates of each predictive model based on the proposed approach compared with the original predictive model based on the truth positive and receiver operating characteristic measurement show the high performance in which the smaller number of features.",
"title": ""
},
{
"docid": "neg:1840207_5",
"text": "Pedestrian detection has progressed significantly in the last years. However, occluded people are notoriously hard to detect, as their appearance varies substantially depending on a wide range of occlusion patterns. In this paper, we aim to propose a simple and compact method based on the FasterRCNN architecture for occluded pedestrian detection. We start with interpreting CNN channel features of a pedestrian detector, and we find that different channels activate responses for different body parts respectively. These findings motivate us to employ an attention mechanism across channels to represent various occlusion patterns in one single model, as each occlusion pattern can be formulated as some specific combination of body parts. Therefore, an attention network with self or external guidances is proposed as an add-on to the baseline FasterRCNN detector. When evaluating on the heavy occlusion subset, we achieve a significant improvement of 8pp to the baseline FasterRCNN detector on CityPersons and on Caltech we outperform the state-of-the-art method by 4pp.",
"title": ""
},
{
"docid": "neg:1840207_6",
"text": "New high-frequency data collection technologies and machine learning analysis techniques could offer new insights into learning, especially in tasks in which students have ample space to generate unique, personalized artifacts, such as a computer program, a robot, or a solution to an engineering challenge. To date most of the work on learning analytics and educational data mining has focused on online courses or cognitive tutors, in which the tasks are more structured and the entirety of interaction happens in front of a computer. In this paper, I argue that multimodal learning analytics could offer new insights into students' learning trajectories, and present several examples of this work and its educational application.",
"title": ""
},
{
"docid": "neg:1840207_7",
"text": "Visible light communication is an innovative and active technique in modern digital wireless communication. In this paper, we describe a new VLC system which has better performance and efficiency than previous systems. Visible light communication (VLC) is an efficient technology for improving the speed and the robustness of the communication link in indoor optical wireless communication systems. In order to achieve a high data rate for VLC systems, multiple input multiple output (MIMO) with OFDM is a feasible option. However, contemporary MIMO with OFDM VLC systems lack diversity and experience performance variation through different optical channels. This is mostly because of the characteristics of the optical elements used for making the receiver. In this paper, we analyze the imaging diversity in MIMO with OFDM VLC systems. Simulation results show the diversity achieved in the different cases.",
"title": ""
},
{
"docid": "neg:1840207_8",
"text": "The analysis of time-oriented data is an important task in many application scenarios. In recent years, a variety of techniques for visualizing such data have been published. This variety makes it difficult for prospective users to select methods or tools that are useful for their particular task at hand. In this article, we develop and discuss a systematic view on the diversity of methods for visualizing time-oriented data. With the proposed categorization we try to untangle the visualization of time-oriented data, which is such an important concern in Visual Analytics. The categorization is not only helpful for users, but also for researchers to identify future tasks in Visual Analytics. r 2007 Elsevier Ltd. All rights reserved. MSC: primary 68U05; 68U35",
"title": ""
},
{
"docid": "neg:1840207_9",
"text": "The performance of face detection has been largely improved with the development of convolutional neural network. However, the occlusion issue due to mask and sunglasses, is still a challenging problem. The improvement on the recall of these occluded cases usually brings the risk of high false positives. In this paper, we present a novel face detector called Face Attention Network (FAN), which can significantly improve the recall of the face detection problem in the occluded case without compromising the speed. More specifically, we propose a new anchor-level attention, which will highlight the features from the face region. Integrated with our anchor assign strategy and data augmentation techniques, we obtain state-of-art results on public face ∗Equal contribution. †Work was done during an internship at Megvii Research. detection benchmarks like WiderFace and MAFA. The code will be released for reproduction.",
"title": ""
},
{
"docid": "neg:1840207_10",
"text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.",
"title": ""
},
{
"docid": "neg:1840207_11",
"text": "Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.",
"title": ""
},
{
"docid": "neg:1840207_12",
"text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.",
"title": ""
},
{
"docid": "neg:1840207_13",
"text": "This work combines the central ideas from two different areas, crowd simulation and social network analysis, to tackle some existing problems in both areas from a new angle. We present a novel spatio-temporal social crowd simulation framework, Social Flocks, to revisit three essential research problems, (a) generation of social networks, (b) community detection in social networks, (c) modeling collective social behaviors in crowd simulation. Our framework produces social networks that satisfy the properties of high clustering coefficient, low average path length, and power-law degree distribution. It can also be exploited as a novel dynamic model for community detection. Finally our framework can be used to produce real-life collective social behaviors over crowds, including community-guided flocking, leader following, and spatio-social information propagation. Social Flocks can serve as visualization of simulated crowds for domain experts to explore the dynamic effects of the spatial, temporal, and social factors on social networks. In addition, it provides an experimental platform of collective social behaviors for social gaming and movie animations. Social Flocks demo is at http://mslab.csie.ntu.edu.tw/socialflocks/ .",
"title": ""
},
{
"docid": "neg:1840207_14",
"text": "Up to this point in the text we have considered the use of the logistic regression model in settings where we observe a single dichotomous response for a sample of statistically independent subjects. However, there are settings where the assumption of independence of responses may not hold for a variety of reasons. For example, consider a study of asthma in children in which subjects are interviewed bi-monthly for 1 year. At each interview the date is recorded and the mother is asked whether, during the previous 2 months, her child had an asthma attack severe enough to require medical attention, whether the child had a chest cold, and how many smokers lived in the household. The child’s age and race are recorded at the first interview. The primary outcome is the occurrence of an asthma attack. What differs here is the lack of independence in the observations due to the fact that we have six measurements on each child. In this example, each child represents a cluster of correlated observations of the outcome. The measurements of the presence or absence of a chest cold and the number of smokers residing in the household can change from observation to observation and thus are called clusterspecific or time-varying covariates. The date changes in a systematic way and is recorded to model possible seasonal effects. The child’s age and race are constant for the duration of the study and are referred to as cluster-level or time-invariant covariates. The terms clusters, subjects, cluster-specific and cluster-level covariates are general enough to describe multiple measurements on a single subject or single measurements on different but related subjects. An example of the latter setting would be a study of all children in a household. Repeated measurements on the same subject or a subject clustered in some sort of unit (household, hospital, or physician) are the two most likely scenarios leading to correlated data.",
"title": ""
},
{
"docid": "neg:1840207_15",
"text": "UNLABELLED\nWe present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm(3) containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.\n\n\nPAPERCLIP\nVIDEO ABSTRACT.",
"title": ""
},
{
"docid": "neg:1840207_16",
"text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "neg:1840207_17",
"text": "In the past years we have witnessed the emergence of the new discipline of computational social science, which promotes a new data-driven and computation-based approach to social sciences. In this article we discuss how the availability of new technologies such as online social media and mobile smartphones has allowed researchers to passively collect human behavioral data at a scale and a level of granularity that were just unthinkable some years ago. We also discuss how these digital traces can then be used to prove (or disprove) existing theories and develop new models of human behavior.",
"title": ""
},
{
"docid": "neg:1840207_18",
"text": "Crossbar architecture has been widely adopted in neural network accelerators due to the efficient implementations on vector-matrix multiplication operations. However, in the case of convolutional neural networks (CNNs), the efficiency is compromised dramatically because of the large amounts of data reuse. Although some mapping methods have been designed to achieve a balance between the execution throughput and resource overhead, the resource consumption cost is still huge while maintaining the throughput. Network pruning is a promising and widely studied method to shrink the model size, whereas prior work for CNNs compression rarely considered the crossbar architecture and the corresponding mapping method and cannot be directly utilized by crossbar-based neural network accelerators. This paper proposes a crossbar-aware pruning framework based on a formulated $L_{0}$ -norm constrained optimization problem. Specifically, we design an $L_{0}$ -norm constrained gradient descent with relaxant probabilistic projection to solve this problem. Two types of sparsity are successfully achieved: 1) intuitive crossbar-grain sparsity and 2) column-grain sparsity with output recombination, based on which we further propose an input feature maps reorder method to improve the model accuracy. We evaluate our crossbar-aware pruning framework on the median-scale CIFAR10 data set and the large-scale ImageNet data set with VGG and ResNet models. Our method is able to reduce the crossbar overhead by 44%–72% with insignificant accuracy degradation. This paper significantly reduce the resource overhead and the related energy cost and provides a new co-design solution for mapping CNNs onto various crossbar devices with much better efficiency.",
"title": ""
}
] |
1840208 | From Spin to Swindle: Identifying Falsification in Financial Text | [
{
"docid": "pos:1840208_0",
"text": "This paper examines the relationship between annual report readability and firm performance and earnings persistence. This is motivated by the Securities and Exchange Commission’s plain English disclosure regulations that attempt to make corporate disclosures easier to read for ordinary investors. I measure the readability of public company annual reports using both the Fog Index from computational linguistics and the length of the document. I find that the annual reports of firms with lower earnings are harder to read (i.e., they have higher Fog and are longer). Moreover, the positive earnings of firms with annual reports that are easier to read are more persistent. This suggests that managers may be opportunistically choosing the readability of annual reports to hide adverse information from investors.",
"title": ""
},
{
"docid": "pos:1840208_1",
"text": "Text summarization is the task of shortening text documents but retaining their overall meaning and information. A good summary should highlight the main concepts of any text document. Many statistical-based, location-based and linguistic-based techniques are available for text summarization. This paper has described a novel hybrid technique for automatic summarization of Punjabi text. Punjabi is an official language of Punjab State in India. There are very few linguistic resources available for Punjabi. The proposed summarization system is hybrid of conceptual-, statistical-, location- and linguistic-based features for Punjabi text. In this system, four new location-based features and two new statistical features (entropy measure and Z score) are used and results are very much encouraging. Support vector machine-based classifier is also used to classify Punjabi sentences into summary and non-summary sentences and to handle imbalanced data. Synthetic minority over-sampling technique is applied for over-sampling minority class data. Results of proposed system are compared with different baseline systems, and it is found that F score, Precision, Recall and ROUGE-2 score of our system are reasonably well as compared to other baseline systems. Moreover, summary quality of proposed system is comparable to the gold summary.",
"title": ""
}
] | [
{
"docid": "neg:1840208_0",
"text": "In this paper, we tackle challenges in migrating enterprise services into hybrid cloud-based deployments, where enterprise operations are partly hosted on-premise and partly in the cloud. Such hybrid architectures enable enterprises to benefit from cloud-based architectures, while honoring application performance requirements, and privacy restrictions on what services may be migrated to the cloud. We make several contributions. First, we highlight the complexity inherent in enterprise applications today in terms of their multi-tiered nature, large number of application components, and interdependencies. Second, we have developed a model to explore the benefits of a hybrid migration approach. Our model takes into account enterprise-specific constraints, cost savings, and increased transaction delays and wide-area communication costs that may result from the migration. Evaluations based on real enterprise applications and Azure-based cloud deployments show the benefits of a hybrid migration approach, and the importance of planning which components to migrate. Third, we shed insight on security policies associated with enterprise applications in data centers. We articulate the importance of ensuring assurable reconfiguration of security policies as enterprise applications are migrated to the cloud. We present algorithms to achieve this goal, and demonstrate their efficacy on realistic migration scenarios.",
"title": ""
},
{
"docid": "neg:1840208_1",
"text": "A benefit of model-driven engineering relies on the automatic generation of artefacts from high-level models through intermediary levels using model transformations. In such a process, the input must be well designed, and the model transformations should be trustworthy. Because of the specificities of models and transformations, classical software test techniques have to be adapted. Among these techniques, mutation analysis has been ported, and a set of mutation operators has been defined. However, it currently requires considerable manual work and suffers from the test data set improvement activity. This activity is a difficult and time-consuming job and reduces the benefits of the mutation analysis. This paper addresses the test data set improvement activity. Model transformation traceability in conjunction with a model of mutation operators and a dedicated algorithm allow to automatically or semi-automatically produce improved test models. The approach is validated and illustrated in two case studies written in Kermeta. Copyright © 2014 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840208_2",
"text": "An emerging class of data-intensive applications involve the geographically dispersed extraction of complex scientific information from very large collections of measured or computed data. Such applications arise, for example, in experimental physics, where the data in question is generated by accelerators, and in simulation science, where the data is generated by supercomputers. So-called Data Grids provide essential infrastructure for such applications, much as the Internet provides essential services for applications such as e-mail and the Web. We describe here two services that we believe are fundamental to any Data Grid: reliable, high-speed transport and replica management. Our high-speed transport service, GridFTP, extends the popular FTP protocol with new features required for Data Grid applications, such as striping and partial file access. Our replica management service integrates a replica catalog with GridFTP transfers to provide for the creation, registration, location, and management of dataset replicas. We present the design of both services and also preliminary performance results. Our implementations exploit security and other services provided by the Globus Toolkit.",
"title": ""
},
{
"docid": "neg:1840208_3",
"text": "The HER-2/neu oncogene is a member of the erbB-like oncogene family, and is related to, but distinct from, the epidermal growth factor receptor. This gene has been shown to be amplified in human breast cancer cell lines. In the current study, alterations of the gene in 189 primary human breast cancers were investigated. HER-2/neu was found to be amplified from 2- to greater than 20-fold in 30% of the tumors. Correlation of gene amplification with several disease parameters was evaluated. Amplification of the HER-2/neu gene was a significant predictor of both overall survival and time to relapse in patients with breast cancer. It retained its significance even when adjustments were made for other known prognostic factors. Moreover, HER-2/neu amplification had greater prognostic value than most currently used prognostic factors, including hormonal-receptor status, in lymph node-positive disease. These data indicate that this gene may play a role in the biologic behavior and/or pathogenesis of human breast cancer.",
"title": ""
},
{
"docid": "neg:1840208_4",
"text": "In high voltage (HV) flyback charging circuits, the importance of transformer parasitics holds a significant part in the overall system parasitics. The HV transformers have a larger number of turns on the secondary side that leads to higher self-capacitance which is inevitable. The conventional wire-wound transformer (CWT) has limitation over the design with larger self-capacitance including increased size and volume. For capacitive load in flyback charging circuit these self-capacitances on the secondary side gets added with device capacitances and dominates the load. For such applications the requirement is to have a transformer with minimum self-capacitances and low profile. In order to achieve the above requirements Planar Transformer (PT) design can be implemented with windings as tracks in Printed Circuit Boards (PCB) each layer is insulated by the FR4 material which aids better insulation. Finite Element Model (FEM) has been developed to obtain the self-capacitance in between the layers for larger turns on the secondary side. The modelled hardware prototype of the Planar Transformer has been characterised for open circuit and short circuit test using Frequency Response Analyser (FRA). The results obtained from FEM and FRA are compared and presented.",
"title": ""
},
{
"docid": "neg:1840208_5",
"text": "This paper proposes new approximate coloring and other related techniques which markedly improve the run time of the branchand-bound algorithm MCR (J. Global Optim., 37, 95–111, 2007), previously shown to be the fastest maximum-clique-finding algorithm for a large number of graphs. The algorithm obtained by introducing these new techniques in MCR is named MCS. It is shown that MCS is successful in reducing the search space quite efficiently with low overhead. Consequently, it is shown by extensive computational experiments that MCS is remarkably faster than MCR and other existing algorithms. It is faster than the other algorithms by an order of magnitude for several graphs. In particular, it is faster than MCR for difficult graphs of very high density and for very large and sparse graphs, even though MCS is not designed for any particular type of graphs. MCS can be faster than MCR by a factor of more than 100,000 for some extremely dense random graphs.",
"title": ""
},
{
"docid": "neg:1840208_6",
"text": "Children quickly acquire basic grammatical facts about their native language. Does this early syntactic knowledge involve knowledge of words or rules? According to lexical accounts of acquisition, abstract syntactic and semantic categories are not primitive to the language-acquisition system; thus, early language comprehension and production are based on verb-specific knowledge. The present experiments challenge this account: We probed the abstractness of young children's knowledge of syntax by testing whether 25- and 21-month-olds extend their knowledge of English word order to new verbs. In four experiments, children used word order appropriately to interpret transitive sentences containing novel verbs. These findings demonstrate that although toddlers have much to learn about their native languages, they represent language experience in terms of an abstract mental vocabulary. These abstract representations allow children to rapidly detect general patterns in their native language, and thus to learn rules as well as words from the start.",
"title": ""
},
{
"docid": "neg:1840208_7",
"text": "Tasks recognizing named entities such as products, people names, or locations from documents have recently received significant attention in the literature. Many solutions to these tasks assume the existence of reference entity tables. An important challenge that needs to be addressed in the entity extraction task is that of ascertaining whether or not a candidate string approximately matches with a named entity in a given reference table.\n Prior approaches have relied on string-based similarity which only compare a candidate string and an entity it matches with. In this paper, we exploit web search engines in order to define new similarity functions. We then develop efficient techniques to facilitate approximate matching in the context of our proposed similarity functions. In an extensive experimental evaluation, we demonstrate the accuracy and efficiency of our techniques.",
"title": ""
},
{
"docid": "neg:1840208_8",
"text": "In today’s competitive business environment, companies are facing challenges in dealing with big data issues for rapid decision making for improved productivity. Many manufacturing systems are not ready to manage big data due to the lack of smart analytics tools. Germany is leading a transformation toward 4th Generation Industrial Revolution (Industry 4.0) based on Cyber-Physical System based manufacturing and service innovation. As more software and embedded intelligence are integrated in industrial products and systems, predictive technologies can further intertwine intelligent algorithms with electronics and tether-free intelligence to predict product performance degradation and autonomously manage and optimize product service needs. This article addresses the trends of industrial transformation in big data environment as well as the readiness of smart predictive informatics tools to manage big data to achieve transparency and productivity. Keywords—Industry 4.0; Cyber Physical Systems; Prognostics and Health Management; Big Data;",
"title": ""
},
{
"docid": "neg:1840208_9",
"text": "The involvement of smart grid in home and building automation systems has led to the development of diverse standards for interoperable products to control appliances, lighting, energy management and security. Smart grid enables a user to control the energy usage according to the price and demand. These standards have been developed in parallel by different organizations, which are either open or proprietary. It is necessary to arrange these standards in such a way that it is easier for potential readers to easily understand and select a suitable standard according to their functionalities without going into the depth of each standard. In this paper, we review the main smart grid standards proposed by different organizations for home and building automation in terms of different function fields. In addition, we evaluate the scope of interoperability, benefits and drawbacks of the standard.",
"title": ""
},
{
"docid": "neg:1840208_10",
"text": "PURPOSE\nTo evaluate gastroesophageal reflux disease (GERD) symptoms, patient satisfaction, and antisecretory drug use in a large group of GERD patients treated with the Stretta procedure (endoluminal temperature-controlled radiofrequency energy for the treatment of GERD) at multiple centers since February 1999.\n\n\nMETHODS\nAll subjects provided informed consent. A health care provider from each institution administered a standardized GERD survey to patients who had undergone Stretta. Subjects provided (at baseline and follow-up) (1) GERD severity (none, mild, moderate, severe), (2) percentage of GERD symptom control, (3) satisfaction, and (4) antisecretory medication use. Outcomes were compared with the McNemar test, paired t test, and Wilcoxon signed rank test.\n\n\nRESULTS\nSurveys of 558 patients were evaluated (33 institutions, mean follow-up of 8 months). Most patients (76%) were dissatisfied with baseline antisecretory therapy for GERD. After treatment, onset of GERD relief was less than 2 months (68.7%) or 2 to 6 months (14.6%). The median drug requirement improved from proton pump inhibitors twice daily to antacids as needed (P < .0001). The percentage of patients with satisfactory GERD control (absent or mild) improved from 26.3% at baseline (on drugs) to 77.0% after Stretta (P < .0001). Median baseline symptom control on drugs was 50%, compared with 90% at follow-up (P < .0001). Baseline patient satisfaction on drugs was 23.2%, compared with 86.5% at follow-up (P < .0001). Subgroup analysis (<1 year vs. >1 year of follow-up) showed a superior effect on symptom control and drug use in those patients beyond 1 year of follow-up, supporting procedure durability.\n\n\nCONCLUSIONS\nThe Stretta procedure results in significant GERD symptom control and patient satisfaction, superior to that derived from drug therapy in this study group. The treatment effect is durable beyond 1 year, and most patients were off all antisecretory drugs at follow-up. 
These results support the use of the Stretta procedure for patients with GERD, particularly those with inadequate control of symptoms on medical therapy.",
"title": ""
},
{
"docid": "neg:1840208_11",
"text": "This paper details a series of pre-professional development interventions to assist teachers in utilizing computational thinking and programming as an instructional tool within other subject areas (i.e. music, language arts, mathematics, and science). It describes the lessons utilized in the interventions along with the instruments used to evaluate them, and offers some preliminary findings.",
"title": ""
},
{
"docid": "neg:1840208_12",
"text": "We introduce a new OS abstraction—light-weight contexts (lwCs)—that provides independent units of protection, privilege, and execution state within a process. A process may include several lwCs, each with possibly different views of memory, file descriptors, and access capabilities. lwCs can be used to efficiently implement roll-back (process can return to a prior recorded state), isolated address spaces (lwCs within the process may have different views of memory, e.g., isolating sensitive data from network-facing components or isolating different user sessions), and privilege separation (in-process reference monitors can arbitrate and control access). lwCs can be implemented efficiently: the overhead of a lwC is proportional to the amount of memory exclusive to the lwC; switching lwCs is quicker than switching kernel threads within the same process. We describe the lwC abstraction and API, and an implementation of lwCs within the FreeBSD 11.0 kernel. Finally, we present an evaluation of common usage patterns, including fast rollback, session isolation, sensitive data isolation, and inprocess reference monitoring, using Apache, nginx, PHP, and OpenSSL.",
"title": ""
},
{
"docid": "neg:1840208_13",
"text": "OBJECTIVE\nTo succinctly summarise five contemporary theories about motivation to learn, articulate key intersections and distinctions among these theories, and identify important considerations for future research.\n\n\nRESULTS\nMotivation has been defined as the process whereby goal-directed activities are initiated and sustained. In expectancy-value theory, motivation is a function of the expectation of success and perceived value. Attribution theory focuses on the causal attributions learners create to explain the results of an activity, and classifies these in terms of their locus, stability and controllability. Social- cognitive theory emphasises self-efficacy as the primary driver of motivated action, and also identifies cues that influence future self-efficacy and support self-regulated learning. Goal orientation theory suggests that learners tend to engage in tasks with concerns about mastering the content (mastery goal, arising from a 'growth' mindset regarding intelligence and learning) or about doing better than others or avoiding failure (performance goals, arising from a 'fixed' mindset). Finally, self-determination theory proposes that optimal performance results from actions motivated by intrinsic interests or by extrinsic values that have become integrated and internalised. Satisfying basic psychosocial needs of autonomy, competence and relatedness promotes such motivation. Looking across all five theories, we note recurrent themes of competence, value, attributions, and interactions between individuals and the learning context.\n\n\nCONCLUSIONS\nTo avoid conceptual confusion, and perhaps more importantly to maximise the theory-building potential of their work, researchers must be careful (and precise) in how they define, operationalise and measure different motivational constructs. 
We suggest that motivation research continue to build theory and extend it to health professions domains, identify key outcomes and outcome measures, and test practical educational applications of the principles thus derived.",
"title": ""
},
{
"docid": "neg:1840208_14",
"text": "Error in medicine is a subject of continuing interest among physicians, patients, policymakers, and the general public. This article examines the issue of disclosure of medical errors in the context of emergency medicine. It reviews the concept of medical error; proposes the professional duty of truthfulness as a justification for error disclosure; examines barriers to error disclosure posed by health care systems, patients, physicians, and the law; suggests system changes to address the issue of medical error; offers practical guidelines to promote the practice of error disclosure; and discusses the issue of disclosure of errors made by another physician.",
"title": ""
},
{
"docid": "neg:1840208_15",
"text": "A central challenge to many fields of science and engineering involves minimizing non-convex error functions over continuous, high dimensional spaces. Gradient descent or quasi-Newton methods are almost ubiquitously used to perform such minimizations, and it is often thought that a main source of difficulty for the ability of these local methods to find the global minimum is the proliferation of local minima with much higher error than the global minimum. Here we argue, based on results from statistical physics, random matrix theory, and neural network theory, that a deeper and more profound difficulty originates from the proliferation of saddle points, not local minima, especially in high dimensional problems of practical interest. Such saddle points are surrounded by high error plateaus that can dramatically slow down learning, and give the illusory impression of the existence of a local minimum. Motivated by these arguments, we propose a new algorithm, the saddle-free Newton method, that can rapidly escape high dimensional saddle points, unlike gradient descent and quasi-Newton methods. We apply this algorithm to deep neural network training, and provide preliminary numerical evidence for its superior performance.",
"title": ""
},
{
"docid": "neg:1840208_16",
"text": "This paper proposes and investigates an offline finite-element-method (FEM)-assisted position and speed observer for brushless dc permanent-magnet (PM) (BLDC-PM) motor drive sensorless control based on the line-to-line PM flux linkage estimation. The zero crossing of the line-to-line PM flux linkage occurs right in the middle of two commutation points (CPs) and is used as a basis for the position and speed observer. The position between CPs is obtained by comparing the estimated line-to-line PM flux with the FEM-calculated line-to-line PM flux. Even if the proposed observer relies on the fundamental model of the machine, a safe starting strategy under heavy load torque, called I-f control, is used, with seamless transition to the proposed sensorless control. The I-f starting method allows low-speed sensorless control, without knowing the initial position and without machine parameter identification. Digital simulations and experimental results are shown, demonstrating the reliability of the FEM-assisted position and speed observer for BLDC-PM motor sensorless control operation.",
"title": ""
},
{
"docid": "neg:1840208_17",
"text": "Distribution patterns along a slope and vertical root distribution were compared among seven major woody species in a secondary forest of the warm-temperate zone in central Japan in relation to differences in soil moisture profiles through a growing season among different positions along the slope. Pinus densiflora, Juniperus rigida, Ilex pedunculosa and Lyonia ovalifolia, growing mostly on the upper part of the slope with shallow soil depth had shallower roots. Quercus serrata and Quercus glauca, occurring mostly on the lower slope with deep soil showed deeper rooting. Styrax japonica, mainly restricted to the foot slope, had shallower roots in spite of growing on the deepest soil. These relations can be explained by the soil moisture profile under drought at each position on the slope. On the upper part of the slope and the foot slope, deep rooting brings little advantage in water uptake from the soil due to the total drying of the soil and no period of drying even in the shallow soil, respectively. However, deep rooting is useful on the lower slope where only the deep soil layer keeps moist. This was supported by better diameter growth of a deep-rooting species on deeper soil sites than on shallower soil sites, although a shallow-rooting species showed little difference between them.",
"title": ""
},
{
"docid": "neg:1840208_18",
"text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.",
"title": ""
}
] |
1840209 | RIOT : One OS to Rule Them All in the IoT | [
{
"docid": "pos:1840209_0",
"text": "We present TinyOS, a flexible, application-specific operating system for sensor networks. Sensor networks consist of (potentially) thousands of tiny, low-power nodes, each of which execute concurrent, reactive programs that must operate with severe memory and power constraints. The sensor network challenges of limited resources, event-centric concurrent applications, and low-power operation drive the design of TinyOS. Our solution combines flexible, fine-grain components with an execution model that supports complex yet safe concurrent operations. TinyOS meets these challenges well and has become the platform of choice for sensor network research; it is in use by over a hundred groups worldwide, and supports a broad range of applications and research topics. We provide a qualitative and quantitative evaluation of the system, showing that it supports complex, concurrent programs with very low memory requirements (many applications fit within 16KB of memory, and the core OS is 400 bytes) and efficient, low-power operation. We present our experiences with TinyOS as a platform for sensor network innovation and applications.",
"title": ""
},
{
"docid": "pos:1840209_1",
"text": "Many threads packages have been proposed for programming wireless sensor platforms. However, many sensor network operating systems still choose to provide an event-driven model, due to efficiency concerns. We present TOS-Threads, a threads package for TinyOS that combines the ease of a threaded programming model with the efficiency of an event-based kernel. TOSThreads is backwards compatible with existing TinyOS code, supports an evolvable, thread-safe kernel API, and enables flexible application development through dynamic linking and loading. In TOS-Threads, TinyOS code runs at a higher priority than application threads and all kernel operations are invoked only via message passing, never directly, ensuring thread-safety while enabling maximal concurrency. The TOSThreads package is non-invasive; it does not require any large-scale changes to existing TinyOS code.\n We demonstrate that TOSThreads context switches and system calls introduce an overhead of less than 0.92% and that dynamic linking and loading takes as little as 90 ms for a representative sensing application. We compare different programming models built using TOSThreads, including standard C with blocking system calls and a reimplementation of Tenet. Additionally, we demonstrate that TOSThreads is able to run computationally intensive tasks without adversely affecting the timing of critical OS services.",
"title": ""
}
] | [
{
"docid": "neg:1840209_0",
"text": "Communication between the deaf and non-deaf has always been a very cumbersome task. This paper aims to cover the various prevailing methods of deaf-mute communication interpreter system. The two broad classification of the communication methodologies used by the deaf –mute people are Wearable Communication Device and Online Learning System. Under Wearable communication method, there are Glove based system, Keypad method and Handicom Touchscreen. All the above mentioned three sub-divided methods make use of various sensors, accelerometer, a suitable microcontroller, a text to speech conversion module, a keypad and a touch-screen. The need for an external device to interpret the message between a deaf –mute and non-deaf-mute people can be overcome by the second method i.e online learning system. The Online Learning System has different methods under it, five of which are explained in this paper. The five sub-divided methods areSLIM module, TESSA, Wi-See Technology, SWI_PELE System and Web-Sign Technology. The working of the individual components used and the operation of the whole system for the communication purpose has been explained in detail in this paper.",
"title": ""
},
{
"docid": "neg:1840209_1",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "neg:1840209_2",
"text": "Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build. In this work, we propose a fully data-driven approach to abstractive sentence summarization. Our method utilizes a local attention-based model that generates each word of the summary conditioned on the input sentence. While the model is structurally simple, it can easily be trained end-to-end and scales to a large amount of training data. The model shows significant performance gains on the DUC-2004 shared task compared with several strong baselines.",
"title": ""
},
{
"docid": "neg:1840209_3",
"text": "The challenge of developing facial recognition systems has been the focus of many research efforts in recent years and has numerous applications in areas such as security, entertainment, and biometrics. Recently, most progress in this field has come from training very deep neural networks on massive datasets which is computationally intensive and time consuming. Here, we propose a deep transfer learning (DTL) approach that integrates transfer learning techniques and convolutional neural networks and apply it to the problem of facial recognition to fine-tune facial recognition models. Transfer learning can allow for the training of robust, high-performance machine learning models that require much less time and resources to produce than similarly performing models that have been trained from scratch. Using a pre-trained face recognition model, we were able to perform transfer learning to produce a network that is capable of making accurate predictions on much smaller datasets. We also compare our results with results produced by a selection of classical algorithms on the same datasets to demonstrate the effectiveness of the proposed DTL approach.",
"title": ""
},
{
"docid": "neg:1840209_4",
"text": "In this paper, we review some of the novel emerging memory technologies and how they can enable energy-efficient implementation of large neuromorphic computing systems. We will highlight some of the key aspects of biological computation that are being mimicked in these novel nanoscale devices, and discuss various strategies employed to implement them efficiently. Though large scale learning systems have not been implemented using these devices yet, we will discuss the ideal specifications and metrics to be satisfied by these devices based on theoretical estimations and simulations. We also outline the emerging trends and challenges in the path towards successful implementations of large learning systems that could be ubiquitously deployed for a wide variety of cognitive computing tasks.",
"title": ""
},
{
"docid": "neg:1840209_5",
"text": "Image classification is a vital technology many people in all arenas of human life utilize. It is pervasive in every facet of the social, economic, and corporate spheres of influence, worldwide. This need for more accurate, detail-oriented classification increases the need for modifications, adaptations, and innovations to Deep learning algorithms. This paper uses Convolutional Neural Networks (CNN) to classify handwritten digits in the MNIST database, and scenes in the CIFAR-10 database. Our proposed method preprocesses the data in the wavelet domain to attain greater accuracy and comparable efficiency to the spatial domain processing. By separating the image into different subbands, important feature learning occurs over varying low to high frequencies. The fusion of the learned low and high frequency features, and processing the combined feature mapping results in an increase in the detection accuracy. Comparing the proposed methods to spatial domain CNN and Stacked Denoising Autoencoder (SDA), experimental findings reveal a substantial increase in accuracy.",
"title": ""
},
{
"docid": "neg:1840209_6",
"text": "The effects of Arctium lappa L. (root) on anti-inflammatory and free radical scavenger activity were investigated. Subcutaneous administration of A. lappa crude extract significantly decreased carrageenan-induced rat paw edema. When simultaneously treated with CCl4, it produced pronounced activities against CCl4-induced acute liver damage. The free radical scavenging activity of its crude extract was also examined by means of an electron spin resonance (ESR) spectrometer. The IC50 of A. lappa extract on superoxide and hydroxyl radical scavenger activity was 2.06 mg/ml and 11.8 mg/ml, respectively. These findings suggest that Arctium lappa possess free radical scavenging activity. The inhibitory effects on carrageenan-induced paw edema and CCl4-induced hepatotoxicity could be due to the scavenging effect of A. lappa.",
"title": ""
},
{
"docid": "neg:1840209_7",
"text": "Brain-computer interaction has already moved from assistive care to applications such as gaming. Improvements in usability, hardware, signal processing, and system integration should yield applications in other nonmedical areas.",
"title": ""
},
{
"docid": "neg:1840209_8",
"text": "Though many tools are available to help programmers working on change tasks, and several studies have been conducted to understand how programmers comprehend systems, little is known about the specific kinds of questions programmers ask when evolving a code base. To fill this gap we conducted two qualitative studies of programmers performing change tasks to medium to large sized programs. One study involved newcomers working on assigned change tasks to a medium-sized code base. The other study involved industrial programmers working on their own change tasks on code with which they had experience. The focus of our analysis has been on what information a programmer needs to know about a code base while performing a change task and also on howthey go about discovering that information. Based on this analysis we catalog and categorize 44 different kinds of questions asked by our participants. We also describe important context for how those questions were answered by our participants, including their use of tools.",
"title": ""
},
{
"docid": "neg:1840209_9",
"text": "Schizophrenia-spectrum risk alleles may persist in the population, despite their reproductive costs in individuals with schizophrenia, through the possible creativity benefits of mild schizotypy in non-psychotic relatives. To assess this creativity-benefit model, we measured creativity (using 6 verbal and 8 drawing tasks), schizotypy, Big Five personality traits, and general intelligence in 225 University of New Mexico students. Multiple regression analyses showed that openness and intelligence, but not schizotypy, predicted reliable observer ratings of verbal and drawing creativity. Thus, the 'madness-creativity' link seems mediated by the personality trait of openness, and standard creativity-benefit models seem unlikely to explain schizophrenia's evolutionary persistence.",
"title": ""
},
{
"docid": "neg:1840209_10",
"text": "This paper proposes LPRNet end-to-end method for Automatic License Plate Recognition without preliminary character segmentation. Our approach is inspired by recent breakthroughs in Deep Neural Networks, and works in real-time with recognition accuracy up to 95% for Chinese license plates: 3 ms/plate on nVIDIA R © GeForceTMGTX 1080 and 1.3 ms/plate on Intel R © CoreTMi7-6700K CPU. LPRNet consists of the lightweight Convolutional Neural Network, so it can be trained in end-to-end way. To the best of our knowledge, LPRNet is the first real-time License Plate Recognition system that does not use RNNs. As a result, the LPRNet algorithm may be used to create embedded solutions for LPR that feature high level accuracy even on challenging Chinese license plates.",
"title": ""
},
{
"docid": "neg:1840209_11",
"text": "The choice of a business process modelling (BPM) tool in combination with the selection of a modelling language is one of the crucial steps in BPM project preparation. Different aspects influence the decision: tool functionality, price, modelling language support, etc. In this paper we discuss the aspect of usability, which has already been recognized as an important topic in software engineering and web design. We conduct a literature review to find out the current state of research on the usability in the BPM field. The results of the literature review show, that although a number of research papers mention the importance of usability for BPM tools, real usability evaluation studies have rarely been undertaken. Based on the results of the literature analysis, the possible research directions in the field of usability of BPM tools are suggested.",
"title": ""
},
{
"docid": "neg:1840209_12",
"text": "We present a compendium of recent and current projects that utilize crowdsourcing technologies for language studies, finding that the quality is comparable to controlled laboratory experiments, and in some cases superior. While crowdsourcing has primarily been used for annotation in recent language studies, the results here demonstrate that far richer data may be generated in a range of linguistic disciplines from semantics to psycholinguistics. For these, we report a number of successful methods for evaluating data quality in the absence of a ‘correct’ response for any given data point.",
"title": ""
},
{
"docid": "neg:1840209_13",
"text": "The Audio/Visual Emotion Challenge and Workshop (AVEC 2017) \"Real-life depression, and affect\" will be the seventh competition event aimed at comparison of multimedia processing and machine learning methods for automatic audiovisual depression and emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing and to bring together the depression and emotion recognition communities, as well as the audiovisual processing communities, to compare the relative merits of the various approaches to depression and emotion recognition from real-life data. This paper presents the novelties introduced this year, the challenge guidelines, the data used, and the performance of the baseline system on the two proposed tasks: dimensional emotion recognition (time and value-continuous), and dimensional depression estimation (value-continuous).",
"title": ""
},
{
"docid": "neg:1840209_14",
"text": "Mango cultivation methods being adopted currently are ineffective and low productive despite consuming huge man power. Advancements in robust unmanned aerial vehicles (UAV's), high speed image processing algorithms and machine vision techniques, reinforce the possibility of transforming agricultural scenario to modernity within prevailing time and energy constraints. Present paper introduces Agricultural Aid for Mango cutting (AAM), an Agribot that could be employed for precision mango farming. It is a quadcopter empowered with vision and cutter systems complemented with necessary ancillaries. It could hover around the trees, detect the ripe mangoes, cut and collect them. Paper also sheds light on the available Agribots that have mostly been limited to the research labs. AAM robot is the first of its kind that once implemented could pave way to the next generation Agribots capable of increasing the agricultural productivity and justify the existence of intelligent machines.",
"title": ""
},
{
"docid": "neg:1840209_15",
"text": "Segmentation and classification of urban range data into different object classes have several challenges due to certain properties of the data, such as density variation, inconsistencies due to missing data and the large data size that require heavy computation and large memory. A method to classify urban scenes based on a super-voxel segmentation of sparse 3D data obtained from LiDAR sensors is presented. The 3D point cloud is first segmented into voxels, which are then characterized by several attributes transforming them into super-voxels. These are joined together by using a link-chain method rather than the usual region growing algorithm to create objects. These objects are then classified using geometrical models and local descriptors. In order to evaluate the results, a new metric that combines both segmentation and classification results simultaneously is presented. The effects of voxel size and incorporation of RGB color and laser reflectance intensity on the classification results are also discussed. The method is evaluated on standard data sets using different metrics to demonstrate its efficacy.",
"title": ""
},
{
"docid": "neg:1840209_16",
"text": "In-band full-duplex (FD) wireless communication, i.e. simultaneous transmission and reception at the same frequency, in the same channel, promises up to 2x spectral efficiency, along with advantages in higher network layers [1]. the main challenge is dealing with strong in-band leakage from the transmitter to the receiver (i.e. self-interference (SI)), as TX powers are typically >100dB stronger than the weakest signal to be received, necessitating TX-RX isolation and SI cancellation. Performing this SI-cancellation solely in the digital domain, if at all possible, would require extremely clean (low-EVM) transmission and a huge dynamic range in the RX and ADC, which is currently not feasible [2]. Cancelling SI entirely in analog is not feasible either, since the SI contains delayed TX components reflected by the environment. Cancelling these requires impractically large amounts of tunable analog delay. Hence, FD-solutions proposed thus far combine SI-rejection at RF, analog BB, digital BB and cross-domain.",
"title": ""
},
{
"docid": "neg:1840209_17",
"text": "BACKGROUND\nThe frontal branch has a defined course along the Pitanguy line from tragus to lateral brow, although its depth along this line is controversial. The high-superficial musculoaponeurotic system (SMAS) face-lift technique divides the SMAS above the arch, which conflicts with previous descriptions of the frontal nerve depth. This anatomical study defines the depth and fascial boundaries of the frontal branch of the facial nerve over the zygomatic arch.\n\n\nMETHODS\nEight fresh cadaver heads were included in the study, with bilateral facial nerves studied (n = 16). The proximal frontal branches were isolated and then sectioned in full-thickness tissue blocks over a 5-cm distance over the zygomatic arch. The tissue blocks were evaluated histologically for the depth and fascial planes surrounding the frontal nerve. A dissection video accompanies this article.\n\n\nRESULTS\nThe frontal branch of the facial nerve was identified in each tissue section and its fascial boundaries were easily identified using epidermis and periosteum as reference points. The frontal branch coursed under a separate fascial plane, the parotid-temporal fascia, which was deep to the SMAS as it coursed to the zygomatic arch and remained within this deep fascia over the arch. The frontal branch was intact and protected by the parotid-temporal fascia after a high-SMAS face lift.\n\n\nCONCLUSIONS\nThe frontal branch of the facial nerve is protected by a deep layer of fascia, termed the parotid-temporal fascia, which is separate from the SMAS as it travels over the zygomatic arch. Division of the SMAS above the arch in a high-SMAS face lift is safe using the technique described in this study.",
"title": ""
},
{
"docid": "neg:1840209_18",
"text": "In this paper, a new optimization approach is designed for convolutional neural network (CNN) which introduces explicit logical relations between filters in the convolutional layer. In a conventional CNN, the filters’ weights in convolutional layers are separately trained by their own residual errors, and the relations of these filters are not explored for learning. Different from the traditional learning mechanism, the proposed correlative filters (CFs) are initiated and trained jointly in accordance with predefined correlations, which are efficient to work cooperatively and finally make a more generalized optical system. The improvement in CNN performance with the proposed CF is verified on five benchmark image classification datasets, including CIFAR-10, CIFAR-100, MNIST, STL-10, and street view house number. The comparative experimental results demonstrate that the proposed approach outperforms a number of state-of-the-art CNN approaches.",
"title": ""
},
{
"docid": "neg:1840209_19",
"text": "Dragon's blood is one of the renowned traditional medicines used in different cultures of world. It has got several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material, varnish and also has got applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done to know about its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source wise chemical constituents and therapeutic uses. As well as, a little attempt has been done to review the techniques used for its quality control and safety.",
"title": ""
}
] |
1840210 | CamBP: a camera-based, non-contact blood pressure monitor | [
{
"docid": "pos:1840210_0",
"text": "Plethysmographic signals were measured remotely (> 1m) using ambient light and a simple consumer level digital camera in movie mode. Heart and respiration rates could be quantified up to several harmonics. Although the green channel featuring the strongest plethysmographic signal, corresponding to an absorption peak by (oxy-) hemoglobin, the red and blue channels also contained plethysmographic information. The results show that ambient light photo-plethysmography may be useful for medical purposes such as characterization of vascular skin lesions (e.g., port wine stains) and remote sensing of vital signs (e.g., heart and respiration rates) for triage or sports purposes.",
"title": ""
}
] | [
{
"docid": "neg:1840210_0",
"text": "When information is abundant, it becomes increasingly difficult to fit nuggets of knowledge into a single coherent picture. Complex stories spaghetti into branches, side stories, and intertwining narratives. In order to explore these stories, one needs a map to navigate unfamiliar territory. We propose a methodology for creating structured summaries of information, which we call metro maps. Our proposed algorithm generates a concise structured set of documents maximizing coverage of salient pieces of information. Most importantly, metro maps explicitly show the relations among retrieved pieces in a way that captures story development. We first formalize characteristics of good maps and formulate their construction as an optimization problem. Then we provide efficient methods with theoretical guarantees for generating maps. Finally, we integrate user interaction into our framework, allowing users to alter the maps to better reflect their interests. Pilot user studies with a real-world dataset demonstrate that the method is able to produce maps which help users acquire knowledge efficiently.",
"title": ""
},
{
"docid": "neg:1840210_1",
"text": "This paper studies asset allocation decisions in the presence of regime switching in asset returns. We find evidence that four separate regimes characterized as crash, slow growth, bull and recovery states are required to capture the joint distribution of stock and bond returns. Optimal asset allocations vary considerably across these states and change over time as investors revise their estimates of the state probabilities. In the crash state, buy-and-hold investors allocate more of their portfolio to stocks the longer their investment horizon, while the optimal allocation to stocks declines as a function of the investment horizon in bull markets. The joint effects of learning about state probabilities and predictability of asset returns from the dividend yield give rise to a non-monotonic relationship between the investment horizon and the demand for stocks. Welfare costs from ignoring regime switching can be substantial even after accounting for parameter uncertainty. Out-of-sample forecasting experiments confirm the economic importance of accounting for the presence of regimes in asset returns.",
"title": ""
},
{
"docid": "neg:1840210_2",
"text": "Shape memory alloy (SMA) actuators, which have ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. Although the number of applications is increasing, there has been limited success in precise motion control since the systems are disturbed by unknown factors beside their inherent nonlinear hysteresis or the surrounding environment of the systems is changed. This paper presents a new development of SMA position control system by using self-tuning fuzzy PID controller. The use of this control algorithm is to tune the parameters of the PID controller by integrating fuzzy inference and producing a fuzzy adaptive PID controller that can be used to improve the control performance of nonlinear systems. The experimental results of position control of SMA actuators using conventional and self tuning fuzzy PID controller are both included in this paper",
"title": ""
},
{
"docid": "neg:1840210_3",
"text": "Recently, machine learning is widely used in applications and cloud services. And as the emerging field of machine learning, deep learning shows excellent ability in solving complex learning problems. To give users better experience, high performance implementations of deep learning applications seem very important. As a common means to accelerate algorithms, FPGA has high performance, low power consumption, small size and other characteristics. So we use FPGA to design a deep learning accelerator, the accelerator focuses on the implementation of the prediction process, data access optimization and pipeline structure. Compared with Core 2 CPU 2.3GHz, our accelerator can achieve promising result.",
"title": ""
},
{
"docid": "neg:1840210_4",
"text": "Supplier selection is nowadays one of the critical topics in supply chain management. This paper presents a new decision making approach for group multi-criteria supplier selection problem, which clubs supplier selection process with order allocation for dynamic supply chains to cope market variations. More specifically, the developed approach imitates the knowledge acquisition and manipulation in a manner similar to the decision makers who have gathered considerable knowledge and expertise in procurement domain. Nevertheless, under many conditions, exact data are inadequate to model real-life situation and fuzzy logic can be incorporated to handle the vagueness of the decision makers. As per this concept, fuzzy-AHP method is used first for supplier selection through four classes (CLASS I: Performance strategy, CLASS II: Quality of service, CLASS III: Innovation and CLASS IV: Risk), which are qualitatively meaningful. Thereafter, using simulation based fuzzy TOPSIS technique, the criteria application is quantitatively evaluated for order allocation among the selected suppliers. As a result, the approach generates decision-making knowledge, and thereafter, the developed combination of rules order allocation can easily be interpreted, adopted and at the same time if necessary, modified by decision makers. To demonstrate the applicability of the proposed approach, an illustrative example is presented and the results analyzed. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840210_5",
"text": "We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of scores for each map. Standard CNN architectures are modified through the incorporation of this module, and trained under the constraint that a convex combination of the intermediate 2D feature vectors, as parameterised by the score matrices, must alone be used for classification. Incentivised to amplify the relevant and suppress the irrelevant or misleading, the scores thus assume the role of attention values. Our experimental observations provide clear evidence to this effect: the learned attention maps neatly highlight the regions of interest while suppressing background clutter. Consequently, the proposed function is able to bootstrap standard CNN architectures for the task of image classification, demonstrating superior generalisation over 6 unseen benchmark datasets. When binarised, our attention maps outperform other CNN-based attention maps, traditional saliency maps, and top object proposals for weakly supervised segmentation as demonstrated on the Object Discovery dataset. We also demonstrate improved robustness against the fast gradient sign method of adversarial attack.",
"title": ""
},
{
"docid": "neg:1840210_6",
"text": "Big data refers to data volumes in the range of exabytes (1018) and beyond. Such volumes exceed the capacity of current on-line storage systems and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the exabyte/year range. But, its creation and aggregation are accelerating and will approach the zettabyte/year range within a few years. Volume is only one aspect of big data; other attributes are variety, velocity, value, and complexity. Storage and data transport are technology issues, which seem to be solvable in the near-term, but represent longterm challenges that require research and new paradigms. We analyze the issues and challenges as we begin a collaborative research program into methodologies for big data analysis and design.",
"title": ""
},
{
"docid": "neg:1840210_7",
"text": "We describe the first mobile app for identifying plant species using automatic visual recognition. The system – called Leafsnap – identifies tree species from photographs of their leaves. Key to this system are computer vision components for discarding non-leaf images, segmenting the leaf from an untextured background, extracting features representing the curvature of the leaf’s contour over multiple scales, and identifying the species from a dataset of the 184 trees in the Northeastern United States. Our system obtains state-of-the-art performance on the real-world images from the new Leafsnap Dataset – the largest of its kind. Throughout the paper, we document many of the practical steps needed to produce a computer vision system such as ours, which currently has nearly a million users.",
"title": ""
},
{
"docid": "neg:1840210_8",
"text": "We demonstrate how to learn efficient heuristics for automated reasoning algorithms through deep reinforcement learning. We consider search algorithms for quantified Boolean logics, that already can solve formulas of impressive size up to 100s of thousands of variables. The main challenge is to find a representation which lends to making predictions in a scalable way. The heuristics learned through our approach significantly improve over the handwritten heuristics for several sets of formulas.",
"title": ""
},
{
"docid": "neg:1840210_9",
"text": "Research in signal processing shows that a variety of transforms have been introduced to map the data from the original space into the feature space, in order to efficiently analyze a signal. These techniques differ in their basis functions, that is used for projecting the signal into a higher dimensional space. One of the widely used schemes for quasi-stationary and non-stationary signals is the time-frequency (TF) transforms, characterized by specific kernel functions. This work introduces a novel class of Ramanujan Fourier Transform (RFT) based TF transform functions, constituted by Ramanujan sums (RS) basis. The proposed special class of transforms offer high immunity to noise interference, since the computation is carried out only on co-resonant components, during analysis of signals. Further, we also provide a 2-D formulation of the RFT function. Experimental validation using synthetic examples, indicates that this technique shows potential for obtaining relatively sparse TF-equivalent representation and can be optimized for characterization of certain real-life signals.",
"title": ""
},
{
"docid": "neg:1840210_10",
"text": "Mobile phones are becoming more and more widely used nowadays, and people do not use the phone only for communication: there is a wide variety of phone applications allowing users to select those that fit their needs. Aggregated over time, application usage patterns exhibit not only what people are consistently interested in but also the way in which they use their phones, and can help improving phone design and personalized services. This work aims at mining automatically usage patterns from apps data recorded continuously with smartphones. A new probabilistic framework for mining usage patterns is proposed. Our methodology involves the design of a bag-of-apps model that robustly represents level of phone usage over specific times of the day, and the use of a probabilistic topic model that jointly discovers patterns of usage over multiple applications and describes users as mixtures of such patterns. Our framework is evaluated using 230 000+ hours of real-life app phone log data, demonstrates that relevant patterns of usage can be extracted, and is objectively validated on a user retrieval task with competitive performance.",
"title": ""
},
{
"docid": "neg:1840210_11",
"text": "Business intelligence and analytics (BIA) is about the development of technologies, systems, practices, and applications to analyze critical business data so as to gain new insights about business and markets. The new insights can be used for improving products and services, achieving better operational efficiency, and fostering customer relationships. In this article, we will categorize BIA research activities into three broad research directions: (a) big data analytics, (b) text analytics, and (c) network analytics. The article aims to review the state-of-the-art techniques and models and to summarize their use in BIA applications. For each research direction, we will also determine a few important questions to be addressed in future research.",
"title": ""
},
{
"docid": "neg:1840210_12",
"text": "INTRODUCTION\nArtificial intelligence is a branch of computer science capable of analysing complex medical data. Their potential to exploit meaningful relationship with in a data set can be used in the diagnosis, treatment and predicting outcome in many clinical scenarios.\n\n\nMETHODS\nMedline and internet searches were carried out using the keywords 'artificial intelligence' and 'neural networks (computer)'. Further references were obtained by cross-referencing from key articles. An overview of different artificial intelligent techniques is presented in this paper along with the review of important clinical applications.\n\n\nRESULTS\nThe proficiency of artificial intelligent techniques has been explored in almost every field of medicine. Artificial neural network was the most commonly used analytical tool whilst other artificial intelligent techniques such as fuzzy expert systems, evolutionary computation and hybrid intelligent systems have all been used in different clinical settings.\n\n\nDISCUSSION\nArtificial intelligence techniques have the potential to be applied in almost every field of medicine. There is need for further clinical trials which are appropriately designed before these emergent techniques find application in the real clinical setting.",
"title": ""
},
{
"docid": "neg:1840210_13",
"text": "The composition and activity of the gut microbiota codevelop with the host from birth and is subject to a complex interplay that depends on the host genome, nutrition, and life-style. The gut microbiota is involved in the regulation of multiple host metabolic pathways, giving rise to interactive host-microbiota metabolic, signaling, and immune-inflammatory axes that physiologically connect the gut, liver, muscle, and brain. A deeper understanding of these axes is a prerequisite for optimizing therapeutic strategies to manipulate the gut microbiota to combat disease and improve health.",
"title": ""
},
{
"docid": "neg:1840210_14",
"text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.",
"title": ""
},
{
"docid": "neg:1840210_15",
"text": "The Levenberg-Marquardt method is a standard technique used to solve nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function. Nonlinear least squares problems arise when the function is not linear in the parameters. Nonlinear least squares methods involve an iterative improvement to parameter values in order to reduce the sum of the squares of the errors between the function and the measured data points. The Levenberg-Marquardt curve-fitting method is actually a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic, and finding the minimum of the quadratic. The Levenberg-Marquardt method acts more like a gradient-descent method when the parameters are far from their optimal value, and acts more like the Gauss-Newton method when the parameters are close to their optimal value. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.",
"title": ""
},
{
"docid": "neg:1840210_16",
"text": "Web applications have become a very popular means of developing software. This is because of many advantages of web applications like no need of installation on each client machine, centralized data, reduction in business cost etc. With the increase in this trend web applications are becoming vulnerable for attacks. Cross site scripting (XSS) is the major threat for web application as it is the most basic attack on web application. It provides the surface for other types of attacks like Cross Site Request Forgery, Session Hijacking etc. There are three types of XSS attacks i.e. non-persistent (or reflected) XSS, persistent (or stored) XSS and DOM-based vulnerabilities. There is one more type that is not as common as those three types, induced XSS. In this work we aim to study and consolidate the understanding of XSS and their origin, manifestation, kinds of dangers and mitigation efforts for XSS. Different approaches proposed by researchers are presented here and an analysis of these approaches is performed. Finally the conclusion is drawn at the end of the work.",
"title": ""
},
{
"docid": "neg:1840210_17",
"text": "Solar energy is an abundant renewable energy source (RES) which is available without any price from the Sun to the earth. It can be a good alternative of energy source in place of non-renewable sources (NRES) of energy like as fossil fuels and petroleum articles. Sun light can be utilized through solar cells which fulfills the need of energy of the utilizer instead of energy generation by NRES. The development of solar cells has crossed by a number of modifications from one age to another. The cost and efficiency of solar cells are the obstacles in the advancement. In order to select suitable solar photovoltaic (PV) cells for a particular area, operators are needed to sense the basic mechanisms and topologies of diverse solar PV with maximum power point tracking (MPPT) methodologies that are checked to a great degree. In this article, authors reviewed and analyzed a successive growth in the solar PV cell research from one decade to other, and explained about their coming fashions and behaviors. This article also attempts to emphasize on many experiments and technologies to contribute the perks of solar energy.",
"title": ""
},
{
"docid": "neg:1840210_18",
"text": "The use of an analogy from a semantically distant domain to guide the problemsolving process was investigated. The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker’s “radiation problem”), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.",
"title": ""
},
{
"docid": "neg:1840210_19",
"text": "Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of control plane may not readily translate into that of data plane. To bridge this gap, we present VeriDP, which can monitor \"whether actual forwarding behaviors are complying with network configurations\". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane fault including black holes and access violations, with a minimal impact on the data plane.",
"title": ""
}
] |
1840211 | An effective and fast iris recognition system based on a combined multiscale feature extraction technique | [
{
"docid": "pos:1840211_0",
"text": "AbstructA method for rapid visual recognition of personal identity is described, based on the failure of a statistical test of independence. The most unique phenotypic feature visible in a person’s face is the detailed texture of each eye’s iris: An estimate of its statistical complexity in a sample of the human population reveals variation corresponding to several hundred independent degrees-of-freedom. Morphogenetic randomness in the texture expressed phenotypically in the iris trabecular meshwork ensures that a test of statistical independence on two coded patterns originating from different eyes is passed almost certainly, whereas the same test is failed almost certainly when the compared codes originate from the same eye. The visible texture of a person’s iris in a real-time video image is encoded into a compact sequence of multi-scale quadrature 2-D Gabor wavelet coefficients, whose most-significant bits comprise a 256-byte “iris code.” Statistical decision theory generates identification decisions from ExclusiveOR comparisons of complete iris codes at the rate of 4000 per second, including calculation of decision confidence levels. The distributions observed empirically in such comparisons imply a theoretical “cross-over” error rate of one in 131000 when a decision criterion is adopted that would equalize the false accept and false reject error rates. In the typical recognition case, given the mean observed degree of iris code agreement, the decision confidence levels correspond formally to a conditional false accept probability of one in about lo”’.",
"title": ""
},
{
"docid": "pos:1840211_1",
"text": "With an increasing emphasis on security, automated personal identification based on biometrics has been receiving extensive attention over the past decade. Iris recognition, as an emerging biometric recognition approach, is becoming a very active topic in both research and practical applications. In general, a typical iris recognition system includes iris imaging, iris liveness detection, and recognition. This paper focuses on the last issue and describes a new scheme for iris recognition from an image sequence. We first assess the quality of each image in the input sequence and select a clear iris image from such a sequence for subsequent recognition. A bank of spatial filters, whose kernels are suitable for iris recognition, is then used to capture local characteristics of the iris so as to produce discriminating texture features. Experimental results show that the proposed method has an encouraging performance. In particular, a comparative study of existing methods for iris recognition is conducted on an iris image database including 2,255 sequences from 213 subjects. Conclusions based on such a comparison using a nonparametric statistical method (the bootstrap) provide useful information for further research.",
"title": ""
}
] | [
{
"docid": "neg:1840211_0",
"text": "Hierarchical text categorization (HTC) refers to assigning a text document to one or more most suitable categories from a hierarchical category space. In this paper we present two HTC techniques based on kNN and SVM machine learning techniques for categorization process and byte n-gram based document representation. They are fully language independent and do not require any text preprocessing steps, or any prior information about document content or language. The effectiveness of the presented techniques and their language independence are demonstrated in experiments performed on five tree-structured benchmark category hierarchies that differ in many aspects: Reuters-Hier1, Reuters-Hier2, 15NGHier and 20NGHier in English and TanCorpHier in Chinese. The results obtained are compared with the corresponding flat categorization techniques applied to leaf level categories of the considered hierarchies. While kNN-based flat text categorization produced slightly better results than kNN-based HTC on the largest TanCorpHier and 20NGHier datasets, SVM-based HTC results do not considerably differ from the corresponding flat techniques, due to shallow hierarchies; still, they outperform both kNN-based flat and hierarchical categorization on all corpora except the smallest Reuters-Hier1 and Reuters-Hier2 datasets. Formal evaluation confirmed that the proposed techniques obtained state-of-the-art results.",
"title": ""
},
{
"docid": "neg:1840211_1",
"text": "With more and more functions in modern battery-powered mobile devices, enabling light-harvesting in the power management system can extend battery usage time [1]. For both indoor and outdoor operations of mobile devices, the output power range of the solar panel with the size of a touchscreen can vary from 100s of µW to a Watt due to the irradiance-level variation. An energy harvester is thus essential to achieve high maximum power-point tracking efficiency (ηT) over this wide power range. However, state-of-the-art energy harvesters only use one maximum power-point tracking (MPPT) method under different irradiance levels as shown in Fig. 22.5.1 [2–5]. Those energy harvesters with power-computation-based MPPT schemes for portable [2,3] and standalone [4] systems suffer from low ηT under low input power due to the limited input dynamic range of the MPPT circuitry. Other low-power energy harvesters with the fractional open-cell voltage (FOCV) MPPT scheme are confined by the fractional-constant accuracy to only offer high ηT across a narrow power range [5]. Additionally, the conventional FOCV MPPT scheme requires long transient time of 250ms to identify MPP [5], thereby significantly reducing energy capture from the solar panel. To address the above issues, this paper presents an energy harvester with an irradiance-aware hybrid algorithm (IAHA) to automatically switch between an auto-zeroed pulse-integration based MPPT (AZ PI-MPPT) and a slew-rate-enhanced FOCV (SRE-FOCV) MPPT scheme for maximizing ηT under different irradiance levels. The SRE-FOCV MPPT scheme also enables the energy harvester to shorten the MPPT transient time to 2.9ms in low irradiance levels.",
"title": ""
},
{
"docid": "neg:1840211_2",
"text": "Accurate segmentation of anatomical structures in chest radiographs is essential for many computer-aided diagnosis tasks. In this paper we investigate the latest fully-convolutional architectures for the task of multi-class segmentation of the lungs field, heart and clavicles in a chest radiograph. In addition, we explore the influence of using different loss functions in the training process of a neural network for semantic segmentation. We evaluate all models on a common benchmark of 247 X-ray images from the JSRT database and ground-truth segmentation masks from the SCR dataset. Our best performing architecture, is a modified U-Net that benefits from pre-trained encoder weights. This model outperformed the current state-of-the-art methods tested on the same benchmark, with Jaccard overlap scores of 96.1% for lung fields, 90.6% for heart and 85.5% for clavicles.",
"title": ""
},
{
"docid": "neg:1840211_3",
"text": "There are various undergoing efforts by system operators to set up an electricity market at the distribution level to enable a rapid and widespread deployment of distributed energy resources (DERs) and microgrids. This paper follows the previous work of the authors in implementing the distribution market operator (DMO) concept, and focuses on investigating the clearing and settlement processes performed by the DMO. The DMO clears the market to assign the awarded power from the wholesale market to customers within its service territory based on their associated demand bids. The DMO accordingly settles the market to identify the distribution locational marginal prices (DLMPs) and calculate payments from each customer and the total payment to the system operator. Numerical simulations exhibit the merits and effectiveness of the proposed DMO clearing and settlement processes.",
"title": ""
},
{
"docid": "neg:1840211_4",
"text": "To address this shortcoming, we propose a method for training binary neural networks with a mixture of bits, yielding effectively fractional bitwidths. We demonstrate that our method is not only effective in allowing finer tuning of the speed to accuracy trade-off, but also has inherent representational advantages. Middle-Out Algorithm Heterogeneous Bitwidth Binarization in Convolutional Neural Networks",
"title": ""
},
{
"docid": "neg:1840211_5",
"text": "The amygdala receives cortical inputs from the medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC) that are believed to affect emotional control and cue-outcome contingencies, respectively. Although mPFC impact on the amygdala has been studied, how the OFC modulates mPFC-amygdala information flow, specifically the infralimbic (IL) division of mPFC, is largely unknown. In this study, combined in vivo extracellular single-unit recordings and pharmacological manipulations were used in anesthetized rats to examine how OFC modulates amygdala neurons responsive to mPFC activation. Compared with basal condition, pharmacological (N-Methyl-D-aspartate) or electrical activation of the OFC exerted an inhibitory modulation of the mPFC-amygdala pathway, which was reversed with intra-amygdala blockade of GABAergic receptors with combined GABAA and GABAB antagonists (bicuculline and saclofen). Moreover, potentiation of the OFC-related pathways resulted in a loss of OFC control over the mPFC-amygdala pathway. These results show that the OFC potently inhibits mPFC drive of the amygdala in a GABA-dependent manner; but with extended OFC pathway activation this modulation is lost. Our results provide a circuit-level basis for this interaction at the level of the amygdala, which would be critical in understanding the normal and pathophysiological control of emotion and contingency associations regulating behavior.",
"title": ""
},
{
"docid": "neg:1840211_6",
"text": "Emotion has been investigated from various perspectives and across several domains within human computer interaction (HCI) including intelligent tutoring systems, interactive web applications, social media and human-robot interaction. One of the most promising and, nevertheless, challenging applications of affective computing (AC) research is within computer games. This chapter focuses on the study of emotion in the computer games domain, reviews seminal work at the crossroads of game technology, game design and affective computing and details the key phases for efficient affectbased interaction in games.",
"title": ""
},
{
"docid": "neg:1840211_7",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "neg:1840211_8",
"text": "BACKGROUND\nAerobic endurance exercise has been shown to improve higher cognitive functions such as executive control in healthy subjects. We tested the hypothesis that a 30-minute individually customized endurance exercise program has the potential to enhance executive functions in patients with major depressive disorder.\n\n\nMETHOD\nIn a randomized within-subject study design, 24 patients with DSM-IV major depressive disorder and 10 healthy control subjects performed 30 minutes of aerobic endurance exercise at 2 different workload levels of 40% and 60% of their predetermined individual 4-mmol/L lactic acid exercise capacity. They were then tested with 4 standardized computerized neuropsychological paradigms measuring executive control functions: the task switch paradigm, flanker task, Stroop task, and GoNogo task. Performance was measured by reaction time. Data were gathered between fall 2000 and spring 2002.\n\n\nRESULTS\nWhile there were no significant exercise-dependent alterations in reaction time in the control group, for depressive patients we observed a significant decrease in mean reaction time for the congruent Stroop task condition at the 60% energy level (p = .016), for the incongruent Stroop task condition at the 40% energy level (p = .02), and for the GoNogo task at both energy levels (40%, p = .025; 60%, p = .048). The exercise procedures had no significant effect on reaction time in the task switch paradigm or the flanker task.\n\n\nCONCLUSION\nA single 30-minute aerobic endurance exercise program performed by depressed patients has positive effects on executive control processes that appear to be specifically subserved by the anterior cingulate.",
"title": ""
},
{
"docid": "neg:1840211_9",
"text": "We report a case of an 8-month-old child with a primitive myxoid mesenchymal tumor of infancy arising in the thenar eminence. The lesion recurred after conservative excision and was ultimately nonresponsive to chemotherapy, necessitating partial amputation. The patient remains free of disease 5 years after this radical surgery. This is the 1st report of such a tumor since it was initially described by Alaggio and colleagues in 2006. The pathologic differential diagnosis is discussed.",
"title": ""
},
{
"docid": "neg:1840211_10",
"text": "To ensure system stability and availability during disturbances, industrial facilities equipped with on-site generation, generally utilize some type of load shedding scheme. In recent years, conventional underfrequency and PLC-based load shedding schemes have been integrated with computerized power management systems to provide an “automated” load shedding system. However, these automated solutions lack system operating knowledge and are still best-guess methods which typically result in excessive or insufficient load shedding. An intelligent load shedding system can provide faster and optimal load relief by utilizing actual operating conditions and knowledge of past system disturbances. This paper presents the need for an intelligent, automated load shedding system. Simulation of case studies for two industrial electrical networks are performed to demonstrate the advantages of an intelligent load shedding system over conventional load shedding methods from the design and operation perspectives. Index Terms — Load Shedding (LS), Intelligent Load Shedding (ILS), Power System Transient Stability, Frequency Relay, Programmable Logic Controller (PLC), Power Management System",
"title": ""
},
{
"docid": "neg:1840211_11",
"text": "Error correction codes provides a mean to detect and correct errors introduced by the transmission channel. This paper presents a high-speed parallel cyclic redundancy check (CRC) implementation based on unfolding, pipelining, and retiming algorithms. CRC architectures are first pipelined to reduce the iteration bound by using novel look-ahead pipelining methods and then unfolded and retimed to design high-speed parallel circuits. The study and implementation using Verilog HDL. Modelsim Xilinx Edition (MXE) will be used for simulation and functional verification. Xilinx ISE will be used for synthesis and bit file generation. The Xilinx Chip scope will be used to test the results on Spartan 3E",
"title": ""
},
{
"docid": "neg:1840211_12",
"text": "Most mobile apps today require access to remote services, and many of them also require users to be authenticated in order to use their services. To ensure the security between the client app and the remote service, app developers often use cryptographic mechanisms such as encryption (e.g., HTTPS), hashing (e.g., MD5, SHA1), and signing (e.g., HMAC) to ensure the confidentiality and integrity of the network messages. However, these cryptographic mechanisms can only protect the communication security, and server-side checks are still needed because malicious clients owned by attackers can generate any messages they wish. As a result, incorrect or missing server side checks can lead to severe security vulnerabilities including password brute-forcing, leaked password probing, and security access token hijacking. To demonstrate such a threat, we present AUTOFORGE, a tool that can automatically forge valid request messages from the client side to test whether the server side of an app has ensured the security of user accounts with sufficient checks. To enable these security tests, a fundamental challenge lies in how to forge a valid cryptographically consistent message such that it can be consumed by the server. We have addressed this challenge with a set of systematic techniques, and applied them to test the server side implementation of 76 popular mobile apps (each of which has over 1,000,000 installs). Our experimental results show that among these apps, 65 (86%) of their servers are vulnerable to password brute-forcing attacks, all (100%) are vulnerable to leaked password probing attacks, and 9 (12%) are vulnerable to Facebook access token hijacking attacks.",
"title": ""
},
{
"docid": "neg:1840211_13",
"text": "Online health communities and support groups are a valuable source of information for users suffering from a physical or mental illness. Users turn to these forums for moral support or advice on specific conditions, symptoms, or side effects of medications. This paper describes and studies the linguistic patterns of a community of support forum users over time focused on the used of anxious related words. We introduce a methodology to identify groups of individuals exhibiting linguistic patterns associated with anxiety and the correlations between this linguistic pattern and other word usage. We find some evidence that participation in these groups does yield positive effects on their users by reducing the frequency of anxious related word used over time.",
"title": ""
},
{
"docid": "neg:1840211_14",
"text": "Place similarity has a central role in geographic information retrieval and geographic information systems, where spatial proximity is frequently just a poor substitute for semantic relatedness. For applications such as toponym disambiguation, alternative measures are thus required to answer the non-trivial question of place similarity in a given context. In this paper, we discuss a novel approach to the construction of a network of locations from unstructured text data. By deriving similarity scores based on the textual distance of toponyms, we obtain a kind of relatedness that encodes the importance of the co-occurrences of place mentions. Based on the text of the English Wikipedia, we construct and provide such a network of place similarities, including entity linking to Wikidata as an augmentation of the contained information. In an analysis of centrality, we explore the networks capability of capturing the similarity between places. An evaluation of the network for the task of toponym disambiguation on the AIDA CoNLL-YAGO dataset reveals a performance that is in line with state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840211_15",
"text": "Recently, various energy harvesting techniques from ambient environments were proposed as alternative methods for powering sensor nodes, which convert the ambient energy from environments into electricity to power sensor nodes. However, those techniques are not applicable to the wireless sensor networks (WSNs) in the environment with no ambient energy source. To overcome this problem, an RF energy transfer method was proposed to power wireless sensor nodes. However, the RF energy transfer method also has a problem of unfairness among sensor nodes due to the significant difference between their energy harvesting rates according to their positions. In this paper, we propose a medium access control (MAC) protocol for WSNs based on RF energy transfer. The proposed MAC protocol adaptively manages the duty cycle of sensor nodes according to their the amount of harvested energy as well as the contention time of sensor nodes considering fairness among them. Through simulations, we show that our protocol can achieve a high degree of fairness, while maintaining duty cycle of sensor nodes appropriately according to the amount of their harvested energy.",
"title": ""
},
{
"docid": "neg:1840211_16",
"text": "We introduce a learning-based approach to detect repeatable keypoints under drastic imaging changes of weather and lighting conditions to which state-of-the-art keypoint detectors are surprisingly sensitive. We first identify good keypoint candidates in multiple training images taken from the same viewpoint. We then train a regressor to predict a score map whose maxima are those points so that they can be found by simple non-maximum suppression. As there are no standard datasets to test the influence of these kinds of changes, we created our own, which we will make publicly available. We will show that our method significantly outperforms the state-of-the-art methods in such challenging conditions, while still achieving state-of-the-art performance on untrained standard datasets.",
"title": ""
},
{
"docid": "neg:1840211_17",
"text": "Broad host-range mini-Tn7 vectors facilitate integration of single-copy genes into bacterial chromosomes at a neutral, naturally evolved site. Here we present a protocol for employing the mini-Tn7 system in bacteria with single attTn7 sites, using the example Pseudomonas aeruginosa. The procedure involves, first, cloning of the genes of interest into an appropriate mini-Tn7 vector; second, co-transfer of the recombinant mini-Tn7 vector and a helper plasmid encoding the Tn7 site-specific transposition pathway into P. aeruginosa by either transformation or conjugation, followed by selection of insertion-containing strains; third, PCR verification of mini-Tn7 insertions; and last, optional Flp-mediated excision of the antibiotic-resistance selection marker present on the chromosomally integrated mini-Tn7 element. From start to verification of the insertion events, the procedure takes as little as 4 d and is very efficient, yielding several thousand transformants per microgram of input DNA or conjugation mixture. In contrast to existing chromosome integration systems, which are mostly based on species-specific phage or more-or-less randomly integrating transposons, the mini-Tn7 system is characterized by its ready adaptability to various bacterial hosts, its site specificity and its efficiency. Vectors have been developed for gene complementation, construction of gene fusions, regulated gene expression and reporter gene tagging.",
"title": ""
},
{
"docid": "neg:1840211_18",
"text": "With computers and the Internet being essential in everyday life, malware poses serious and evolving threats to their security, making the detection of malware of utmost concern. Accordingly, there have been many researches on intelligent malware detection by applying data mining and machine learning techniques. Though great results have been achieved with these methods, most of them are built on shallow learning architectures. Due to its superior ability in feature learning through multilayer deep architecture, deep learning is starting to be leveraged in industrial and academic research for different applications. In this paper, based on the Windows application programming interface calls extracted from the portable executable files, we study how a deep learning architecture can be designed for intelligent malware detection. We propose a heterogeneous deep learning framework composed of an AutoEncoder stacked up with multilayer restricted Boltzmann machines and a layer of associative memory to detect newly unknown malware. The proposed deep learning model performs as a greedy layer-wise training operation for unsupervised feature learning, followed by supervised parameter fine-tuning. Different from the existing works which only made use of the files with class labels (either malicious or benign) during the training phase, we utilize both labeled and unlabeled file samples to pre-train multiple layers in the heterogeneous deep learning framework from bottom to up for feature learning. A comprehensive experimental study on a real and large file collection from Comodo Cloud Security Center is performed to compare various malware detection approaches. Promising experimental results demonstrate that our proposed deep learning framework can further improve the overall performance in malware detection compared with traditional shallow learning methods, deep learning methods with homogeneous framework, and other existing anti-malware scanners. 
The proposed heterogeneous deep learning framework can also be readily applied to other malware detection tasks.",
"title": ""
}
] |
1840212 | Herbert Simon ’ s Decision-Making Approach : Investigation of Cognitive Processes in Experts | [
{
"docid": "pos:1840212_0",
"text": "P eople with higher cognitive ability (or “IQ”) differ from those with lower cognitive ability in a variety of important and unimportant ways. On average, they live longer, earn more, have larger working memories, faster reaction times and are more susceptible to visual illusions (Jensen, 1998). Despite the diversity of phenomena related to IQ, few have attempted to understand—or even describe—its influences on judgment and decision making. Studies on time preference, risk preference, probability weighting, ambiguity aversion, endowment effects, anchoring and other widely researched topics rarely make any reference to the possible effects of cognitive abilities (or cognitive traits). Decision researchers may neglect cognitive ability because they are more interested in the average effect of some experimental manipulation. On this view, individual differences (in intelligence or anything else) are regarded as a nuisance—as just another source of “unexplained” variance. Second, most studies are conducted on college undergraduates, who are widely perceived as fairly homogenous. Third, characterizing performance differences on cognitive tasks requires terms (“IQ” and “aptitudes” and such) that many object to because of their association with discriminatory policies. In short, researchers may be reluctant to study something they do not find interesting, that is not perceived to vary much within the subject pool conveniently obtained, and that will just get them into trouble anyway. But as Lubinski and Humphreys (1997) note, a neglected aspect does not cease to operate because it is neglected, and there is no good reason for ignoring the possibility that general intelligence or various more specific cognitive abilities are important causal determinants of decision making. To provoke interest in this",
"title": ""
}
] | [
{
"docid": "neg:1840212_0",
"text": "When human agents come together to make decisions it is often the case that one human agent has more information than the other and this phenomenon is called information asymmetry and this distorts the market. Often if one human agent intends to manipulate a decision in its favor the human agent can signal wrong or right information. Alternatively, one human agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have been made easier. This chapter studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificial intelligent agents reduce the degree of information asymmetry and thus the market where these agents are deployed become more efficient. It is also postulated that the more artificial intelligent agents there are deployed in the market the less is the volume of trades in the market. This is because for trade to happen the asymmetry of information on goods and services to be traded should exist.",
"title": ""
},
{
"docid": "neg:1840212_1",
"text": "Knowledge graphs have challenged the existing embedding-based approaches for representing their multifacetedness. To address some of the issues, we have investigated some novel approaches that (i) capture the multilingual transitions on different language-specific versions of knowledge, and (ii) encode the commonly existing monolingual knowledge with important relational properties and hierarchies. In addition, we propose the use of our approaches in a wide spectrum of NLP tasks that have not been well explored by related works.",
"title": ""
},
{
"docid": "neg:1840212_2",
"text": "A differential CMOS Logic family that is well suited to automated logic minimization and placement and routing techniques, yet has comparable performance to conventional CMOS, will be described. A CMOS circuit using 10,880 NMOS differential pairs has been developed using this approach.",
"title": ""
},
{
"docid": "neg:1840212_3",
"text": "In this paper we present a Petri net simulator that will be used for modeling of queues. The concept of queue is ubiquitous in the everyday life. Queues are present at the airports, banks, shops etc. Many factors are important in the studying of queues – some of them are: the average waiting time in the queue, the usage of the server, expected length of the queue etc. The paper is organized as follows: first we give a brief overview of the queuing theory. After that – we describe the Petri nets, as a tool for modeling of discrete event systems. We have implemented Petri net simulator that is capable for queue modeling. Finally we are presenting the simulation results for M/M/1 queue and its different properties. We’ve also made comparison between theoretically obtained results and our simulation results.",
"title": ""
},
{
"docid": "neg:1840212_4",
"text": "The preservation of privacy when publishing spatiotemporal traces of mobile humans is a field that is receiving growing attention. However, while more and more services offer personalized privacy options to their users, few trajectory anonymization algorithms are able to handle personalization effectively, without incurring unnecessary information distortion. In this paper, we study the problem of Personalized (K,∆)anonymity, which builds upon the model of (k,δ)-anonymity, while allowing users to have their own individual privacy and service quality requirements. First, we propose efficient modifications to state-of-the-art (k,δ)-anonymization algorithms by introducing a novel technique built upon users’ personalized privacy settings. This way, we avoid over-anonymization and we decrease information distortion. In addition, we utilize datasetaware trajectory segmentation in order to further reduce information distortion. We also study the novel problem of Bounded Personalized (Κ,∆)-anonymity, where the algorithm gets as input an upper bound the information distortion being accepted, and introduce a solution to this problem by editing the (k,δ) requirements of the highest demanding trajectories. Our extensive experimental study over real life trajectories shows the effectiveness of the proposed techniques.",
"title": ""
},
{
"docid": "neg:1840212_5",
"text": "We examined emotional stability, ambition (an aspect of extraversion), and openness as predictors of adaptive performance at work, based on the evolutionary relevance of these traits to human adaptation to novel environments. A meta-analysis on 71 independent samples (N = 7,535) demonstrated that emotional stability and ambition are both related to overall adaptive performance. Openness, however, does not contribute to the prediction of adaptive performance. Analysis of predictor importance suggests that ambition is the most important predictor for proactive forms of adaptive performance, whereas emotional stability is the most important predictor for reactive forms of adaptive performance. Job level (managers vs. employees) moderates the effects of personality traits: Ambition and emotional stability exert stronger effects on adaptive performance for managers as compared to employees.",
"title": ""
},
{
"docid": "neg:1840212_6",
"text": "This Internet research project examined the relationship between consumption of muscle and fitness magazines and/or various indices of pornography and body satisfaction in gay and heterosexual men. Participants (N = 101) were asked to complete body satisfaction questionnaires that addressed maladaptive eating attitudes, the drive for muscularity, and social physique anxiety. Participants also completed scales measuring self-esteem, depression, and socially desirable responding. Finally, respondents were asked about their consumption of muscle and fitness magazines and pornography. Results indicated that viewing and purchasing of muscle and fitness magazines correlated positively with levels of body dissatisfaction for both gay and heterosexual men. Pornography exposure was positively correlated with social physique anxiety for gay men. The limitations of this study and directions for future research are outlined.",
"title": ""
},
{
"docid": "neg:1840212_7",
"text": "Common nonlinear activation functions used in neural networks can cause training difficulties due to the saturation behavior of the activation function, which may hide dependencies that are not visible to vanilla-SGD (using first order gradients only). Gating mechanisms that use softly saturating activation functions to emulate the discrete switching of digital logic circuits are good examples of this. We propose to exploit the injection of appropriate noise so that the gradients may flow easily, even if the noiseless application of the activation function would yield zero gradient. Large noise will dominate the noise-free gradient and allow stochastic gradient descent to explore more. By adding noise only to the problematic parts of the activation function, we allow the optimization procedure to explore the boundary between the degenerate (saturating) and the well-behaved parts of the activation function. We also establish connections to simulated annealing, when the amount of noise is annealed down, making it easier to optimize hard objective functions. We find experimentally that replacing such saturating activation functions by noisy variants helps training in many contexts, yielding state-of-the-art or competitive results on different datasets and tasks, especially when training seems to be the most difficult, e.g., when curriculum learning is necessary to obtain good results.",
"title": ""
},
{
"docid": "neg:1840212_8",
"text": "Are there important cyclical fluctuations in bond market premiums and, if so, with what macroeconomic aggregates do these premiums vary? We use the methodology of dynamic factor analysis for large datasets to investigate possible empirical linkages between forecastable variation in excess bond returns and macroeconomic fundamentals. We find that “real” and “inflation” factors have important forecasting power for future excess returns on U.S. government bonds, above and beyond the predictive power contained in forward rates and yield spreads. This behavior is ruled out by commonly employed affine term structure models where the forecastability of bond returns and bond yields is completely summarized by the cross-section of yields or forward rates. An important implication of these findings is that the cyclical behavior of estimated risk premia in both returns and long-term yields depends importantly on whether the information in macroeconomic factors is included in forecasts of excess bond returns. Without the macro factors, risk premia appear virtually acyclical, whereas with the estimated factors risk premia have a marked countercyclical component, consistent with theories that imply investors must be compensated for risks associated with macroeconomic activity. (JEL E0, E4, G10, G12)",
"title": ""
},
{
"docid": "neg:1840212_9",
"text": "The impetus for our study was the contention of both Lynn [Lynn, R. (1991) Race differences in intelligence: A global perspective. Mankind Quarterly, 31, 255–296] and Rushton [Rushton, J. P. (1995). Race, evolution and behavior: A life history perspective. New Brunswick, NJ: Transaction; Rushton, J. P. (1997). Race, intelligence, and the brain: The errors and omissions of the revised edition of S.J. Gould’s the mismeasurement of man. Personality and Individual Differences, 23, 169–180; Rushton, J. P. (2000). Race, evolution, and behavior. A life history perspective (3rd edition). Port Huron: Charles Darwin Research Institute] that persons in colder climates tend to have higher IQs than persons in warmer climates. We correlated mean IQ of 129 countries with per capita income, skin color, and winter and summer temperatures, conceptualizing skin color as a multigenerational reflection of climate. The highest correlations were 0.92 (rho = 0.91) for skin color, 0.76 (rho = 0.76) for mean high winter temperature, 0.66 (rho = 0.68) for mean low winter temperature, and 0.63 (rho = 0.74) for real gross domestic product per capita. The correlations with population of country controlled for are almost identical. Our findings provide strong support for the observation of Lynn and of Rushton that persons in colder climates tend to have higher IQs. These findings could also be viewed as congruent with, although not providing unequivocal evidence for, the contention that higher intelligence evolves in colder climates. The finding of higher IQ in Eurasians than Africans could also be viewed as congruent with the position of Diamond (1997) that knowledge and resources are transmitted more readily on the Eurasian west–east axis. © 2005 Elsevier Inc. All rights reserved. Both Rushton (1995, 1997, 2000) and Lynn (1991) have pointed out that ethnic groups in colder climates score higher on intelligence tests than ethnic groups in warmer climates. 
They contend that greater intelligence is needed to adapt to a colder climate so that, over many generations, the more intelligent members of a population are more likely to survive and reproduce. Their temperature and IQ analyses have been descriptive rather than quantitative, however. In the present quantitative study, we predicted a negative correlation between IQ and temperature. We hypothesized that correlations would be higher for mean winter temperatures (January in the Northern Hemisphere and July in the Southern Hemisphere) than for mean summer temperatures. Skin color was conceptualized as a variable closely related to temperature. It is viewed by the present authors as a multigenerational reflection of the climates one’s ancestors have lived in for thousands of years. Another reason to predict correlations of IQ with temperature and skin color is the product–moment correlation reported by Beals, Smith, and Dodd (1984) of 0.62 between cranial capacity and distance from the equator. Beals et al. based their finding on 20,000 individual crania from every continent and representing 122 ethnically distinguishable populations. Jensen (1998) reasoned that natural selection would favor a smaller head with a less spherical shape because of better heat dissipation in hot climates. Natural selection in colder climates would favor a more spherical head to accommodate a larger brain and to have better heat conservation. We used an index of per capita income-real gross domestic product (GDP) per capita to compare the correlations of income with IQ to those of temperature and skin color with IQ. There is a strong rationale for predicting a positive relationship between IQ and real GDP per capita. 
Common sense dictates that more intelligent populations can achieve greater scientific, technological, and organizational advancement. Furthermore, it is well established that conditions associated with poverty, such as malnutrition and inadequate prenatal/perinatal and other health care, can prevent the attainment of genetic potential. Lynn and Vanhanen (2002) did indeed find positive correlations between adjusted IQ and real GDP per capita of nations throughout the world. Their scatter plots vividly show that countries south of the Sahara Desert have both the lowest real GDPs per capita in the world and the lowest mean IQs in the world (in the 60s and 70s). The real GDP per capita in high-IQ countries is much more variable. For example, China and Korea have very high mean IQs but rather low real GDPs per capita. In this study, we considered only countries (N = 129) with primarily indigenous people—those with populations that have persisted since before the voyages of Christopher Columbus. It is acknowledged that there have been many migrations both before and after Columbus. However, the year 1492 has previously been used to define indigenous populations (Cavalli-Sforza, Menozzi, & Piazza, 1994).",
"title": ""
},
{
"docid": "neg:1840212_10",
"text": "Quick Response (QR) codes are two-dimensional barcodes that can be used to efficiently store small amounts of data. They are increasingly used in all fields of life, especially with the wide spread of smartphones, which are used as QR code scanners. While QR codes have many advantages that make them very popular, there are several security issues and risks associated with them. Running malicious code, stealing users' sensitive information, violating their privacy, and identity theft are some typical security risks that a user might be subject to in the background while he/she is just reading the QR code in the foreground. In this paper, a security system for QR codes that addresses the security concerns of both users and generators is implemented. The system is backward compatible with the current standard used for encoding QR codes. The system is implemented and tested using an Android-based smartphone application. It was found that the system introduces only a small overhead in terms of the delay required for integrity verification and content validation.",
"title": ""
},
{
"docid": "neg:1840212_11",
"text": "To perform power augmentation tasks with a robotic exoskeleton, this paper utilizes fuzzy approximation and designed disturbance observers to compensate for the disturbance torques caused by unknown input saturation, fuzzy approximation errors, viscous friction, gravity, and payloads. The proposed adaptive fuzzy control, with a parameter-update mechanism and additional torque inputs from the disturbance observers, is applied to the robotic exoskeleton via feedforward loops to counteract the disturbances. With this approach, the system does not require built-in torque sensing units. In order to validate the proposed framework, extensive experiments are conducted on the upper limb exoskeleton using state feedback and output feedback control to illustrate the performance of the proposed approaches.",
"title": ""
},
{
"docid": "neg:1840212_12",
"text": "Live fish recognition is one of the most crucial elements of fisheries survey applications, where vast amounts of data are rapidly acquired. Different from general scenarios, challenges to underwater image recognition are posed by poor image quality, uncontrolled objects and environment, and difficulty in acquiring representative samples. In addition, most existing feature extraction techniques cannot be fully automated because they involve human supervision. Toward this end, we propose an underwater fish recognition framework that consists of a fully unsupervised feature learning technique and an error-resilient classifier. Object parts are initialized based on saliency and relaxation labeling to match object parts correctly. A non-rigid part model is then learned based on fitness, separation, and discrimination criteria. For the classifier, an unsupervised clustering approach generates a binary class hierarchy, where each node is a classifier. To exploit information from ambiguous images, the notion of partial classification is introduced to assign coarse labels by optimizing the benefit of indecision made by the classifier. Experiments show that the proposed framework achieves high accuracy on both public and self-collected underwater fish images with high uncertainty and class imbalance.",
"title": ""
},
{
"docid": "neg:1840212_13",
"text": "To be tractable and robust to data noise, existing metric learning algorithms commonly rely on PCA as a pre-processing step. How can we know, however, that PCA, or any other specific dimensionality reduction technique, is the method of choice for the problem at hand? The answer is simple: We cannot! To address this issue, in this paper, we develop a Riemannian framework to jointly learn a mapping performing dimensionality reduction and a metric in the induced space. Our experiments evidence that, while we directly work on high-dimensional features, our approach yields runtimes competitive with, and accuracy higher than, state-of-the-art metric learning algorithms.",
"title": ""
},
{
"docid": "neg:1840212_14",
"text": "Skin diseases are very common in our daily life. Due to the similar appearance of skin diseases, automatic classification through lesion images is quite a challenging task. In this paper, a novel multi-classification method based on a convolutional neural network (CNN) is proposed for dermoscopy images. A CNN with a nested residual structure is designed first, which can learn more information than the original residual structure. Then, the designed network is trained through transfer learning. With the trained network, 6 kinds of lesion diseases are classified, including nevus, seborrheic keratosis, psoriasis, seborrheic dermatitis, eczema and basal cell carcinoma. The experiments are conducted on six-classification and two-classification tasks, and with accuracies of 65.8% and 90% respectively, our method greatly outperforms 4 other state-of-the-art networks and the average performance of 149 professional dermatologists.",
"title": ""
},
{
"docid": "neg:1840212_15",
"text": "We consider the problem of scheduling tasks requiring certain processing times on one machine so that the busy time of the machine is maximized. The problem is to find a probabilistic online algorithm with a reasonable worst-case performance ratio. We answer an open problem of Lipton and Tompkins concerning the best possible ratio that can be achieved. Furthermore, we extend their results to an m-machine analogue. Finally, a variant of the problem is analyzed, in which the machine is provided with a buffer to store one job.",
"title": ""
},
{
"docid": "neg:1840212_16",
"text": "We simulate the growth of a benign avascular tumour embedded in normal tissue, including cell sorting that occurs between tumour and normal cells, due to the variation of adhesion between different cell types. The simulation uses the Potts Model, an energy minimisation method. Trial random movements of cell walls are checked to see if they reduce the adhesion energy of the tissue. These trials are then accepted with Boltzmann weighted probability. The simulated tumour initially grows exponentially, then forms three concentric shells as the nutrient level supplied to the core by diffusion decreases: the outer shell consists of live proliferating cells, the middle of quiescent cells and the centre is a necrotic core, where the nutrient concentration is below the critical level that sustains life. The growth rate of the tumour decreases at the onset of shell formation in agreement with experimental observation. The tumour eventually approaches a steady state, where the increase in volume due to the growth of the proliferating cells equals the loss of volume due to the disintegration of cells in the necrotic core. The final thickness of the shells also agrees with experiment.",
"title": ""
},
{
"docid": "neg:1840212_17",
"text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.",
"title": ""
},
{
"docid": "neg:1840212_18",
"text": "The proliferation of sensors and mobile devices and their connectedness to the network have given rise to numerous types of situation monitoring applications. Data Stream Management Systems (DSMSs) have been proposed to address the data processing needs of such applications that require collection of high-speed data, computing results on-the-fly, and taking actions in real-time. Although a lot of work appears in the area of DSMSs, not much has been done on multilevel secure (MLS) DSMSs, making the technology unsuitable for highly sensitive applications, such as battlefield monitoring. An MLS–DSMS should ensure the absence of illegal information flow in a DSMS and more importantly provide the performance needed to handle continuous queries. We illustrate why the traditional DSMSs cannot be used for processing multilevel secure continuous queries and discuss various DSMS architectures for processing such queries. We implement one such architecture and demonstrate how it processes continuous queries. In order to provide better quality of service and memory usage in a DSMS, we show how continuous queries submitted by various users can be shared. We provide experimental evaluations to demonstrate the performance benefits achieved through query sharing.",
"title": ""
}
]